00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1907
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3173
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.062 The recommended git tool is: git
00:00:00.062 using credential 00000000-0000-0000-0000-000000000002
00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.101 Fetching changes from the remote Git repository
00:00:00.103 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.155 Using shallow fetch with depth 1
00:00:00.155 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.155 > git --version # timeout=10
00:00:00.212 > git --version # 'git version 2.39.2'
00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.256 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.999 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.011 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.022 Checking out Revision bdda68d1e41499f94b336830106e36e3602574f3 (FETCH_HEAD)
00:00:07.022 > git config core.sparsecheckout # timeout=10
00:00:07.033 > git read-tree -mu HEAD # timeout=10
00:00:07.048 > git checkout -f bdda68d1e41499f94b336830106e36e3602574f3 # timeout=5
00:00:07.067 Commit message: "jenkins/jjb-config: Make sure proxies are set for pkgdep jobs"
00:00:07.067 > git rev-list --no-walk bdda68d1e41499f94b336830106e36e3602574f3 # timeout=10
00:00:07.171 [Pipeline] Start of Pipeline
00:00:07.188 [Pipeline] library
00:00:07.190 Loading library shm_lib@master
00:00:07.190 Library shm_lib@master is cached. Copying from home.
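
The checkout above is the Jenkins git plugin doing a credentialed, depth-1 fetch of a single branch and then a detached checkout of FETCH_HEAD. A minimal stand-alone equivalent, with the credential and proxy plumbing omitted (a sketch, not the plugin's exact invocation):

  git init jbp && cd jbp
  git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  # depth-1 fetch of just refs/heads/master, as in the log above
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # detach onto the fetched revision
  git checkout -f bdda68d1e41499f94b336830106e36e3602574f3
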
00:00:07.212 [Pipeline] node
00:00:22.223 Still waiting to schedule task
00:00:22.223 ‘FCP03’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘FCP04’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘FCP07’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘FCP08’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘FCP09’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘FCP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘FCP11’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘FCP12’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘GP10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.223 ‘GP11’ is offline
00:00:22.224 ‘GP12’ is offline
00:00:22.224 ‘GP13’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP14’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP15’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP16’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP19’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP1’ is offline
00:00:22.224 ‘GP20’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP21’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP22’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP24’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP2’ is offline
00:00:22.224 ‘GP3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP4’ is offline
00:00:22.224 ‘GP5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.224 ‘GP6’ is offline
00:00:22.224 ‘GP8’ is offline
00:00:22.225 ‘GP9’ is offline
00:00:22.225 ‘ImageBuilder1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘Jenkins’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘ME1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘ME2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘ME3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘PE5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM10’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM11’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM28’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM29’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM30’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM31’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM5’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM6’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM7’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘SM8’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘VM-host-PE1’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘VM-host-PE2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘VM-host-PE3’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘VM-host-PE4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘VM-host-SM18’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘VM-host-WFP25’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WCP0’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WCP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WCP4’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP13’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP17’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP21’ is offline
00:00:22.225 ‘WFP23’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP29’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP2’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP32’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP33’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP34’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP35’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP36’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP37’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP38’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP41’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP49’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP50’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP63’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP65’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP66’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP67’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP68’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP69’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.225 ‘WFP6’ is offline
00:00:22.226 ‘WFP8’ is offline
00:00:22.226 ‘WFP9’ is offline
00:00:22.226 ‘agt-_changes_185692-7563’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.226 ‘ipxe-staging’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.226 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.226 ‘spdk-pxe-01’ doesn’t have label ‘DiskNvme&&NetCVL’
00:00:22.226 ‘spdk-pxe-02’ doesn’t have label ‘DiskNvme&&NetCVL’
00:24:22.597 Running on WFP5 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:24:22.599 [Pipeline] {
00:24:22.610 [Pipeline] catchError
00:24:22.612 [Pipeline] {
00:24:22.626 [Pipeline] wrap
00:24:22.635 [Pipeline] {
00:24:22.643 [Pipeline] stage
00:24:22.645 [Pipeline] { (Prologue)
00:24:22.809 [Pipeline] sh
00:24:23.088 + logger -p user.info -t JENKINS-CI
00:24:23.108 [Pipeline] echo
00:24:23.110 Node: WFP5
00:24:23.120 [Pipeline] sh
00:24:23.420 [Pipeline] setCustomBuildProperty
00:24:23.436 [Pipeline] echo
00:24:23.438 Cleanup processes
00:24:23.444 [Pipeline] sh
00:24:23.726 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:23.726 1848599 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:23.741 [Pipeline] sh
00:24:24.024 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:24.024 ++ grep -v 'sudo pgrep'
00:24:24.024 ++ awk '{print $1}'
00:24:24.024 + sudo kill -9
00:24:24.024 + true
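
Condensed, the cleanup step above is a guarded kill of any leftover test processes in the workspace; the trailing + true shows it is deliberately allowed to fail when nothing matches. A one-screen sketch of the same commands:

  # list candidate PIDs, excluding the pgrep invocation itself
  pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
         | grep -v 'sudo pgrep' | awk '{print $1}')
  # in this run the list was empty, so kill -9 ran with no arguments
  # and the || true absorbed the resulting error
  sudo kill -9 $pids || true
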
00:24:24.040 [Pipeline] cleanWs
00:24:24.050 [WS-CLEANUP] Deleting project workspace...
00:24:24.050 [WS-CLEANUP] Deferred wipeout is used...
00:24:24.057 [WS-CLEANUP] done
00:24:24.062 [Pipeline] setCustomBuildProperty
00:24:24.081 [Pipeline] sh
00:24:24.363 + sudo git config --global --replace-all safe.directory '*'
00:24:24.438 [Pipeline] nodesByLabel
00:24:24.440 Found a total of 2 nodes with the 'sorcerer' label
00:24:24.450 [Pipeline] httpRequest
00:24:24.454 HttpMethod: GET
00:24:24.455 URL: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:24:24.460 Sending request to url: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:24:24.463 Response Code: HTTP/1.1 200 OK
00:24:24.463 Success: Status code 200 is in the accepted range: 200,404
00:24:24.464 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:24:24.743 [Pipeline] sh
00:24:25.025 + tar --no-same-owner -xf jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:24:25.044 [Pipeline] httpRequest
00:24:25.049 HttpMethod: GET
00:24:25.049 URL: http://10.211.164.101/packages/spdk_5f5c5275309d65c182b25782730fba620b0c2be8.tar.gz
00:24:25.050 Sending request to url: http://10.211.164.101/packages/spdk_5f5c5275309d65c182b25782730fba620b0c2be8.tar.gz
00:24:25.052 Response Code: HTTP/1.1 200 OK
00:24:25.053 Success: Status code 200 is in the accepted range: 200,404
00:24:25.053 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5f5c5275309d65c182b25782730fba620b0c2be8.tar.gz
00:24:27.988 [Pipeline] sh
00:24:28.275 + tar --no-same-owner -xf spdk_5f5c5275309d65c182b25782730fba620b0c2be8.tar.gz
00:24:30.824 [Pipeline] sh
00:24:31.106 + git -C spdk log --oneline -n5
00:24:31.106 5f5c52753 lib/event: Set/clear log flag RPC doesn't return an error for invalid flag name.
00:24:31.106 5b4cf6db0 nvme/tcp: allocate nvme_tcp_req aligned to a cache line
00:24:31.106 c69768bd4 nvmf: add more debug logs related to cntlid and qid
00:24:31.106 7d5421b64 test/cuse: active namespaces were tested incorrectly
00:24:31.106 344c65257 nvmf/auth: add dhvlen check
00:24:31.127 [Pipeline] withCredentials
00:24:31.135 > git --version # timeout=10
00:24:31.147 > git --version # 'git version 2.39.2'
00:24:31.162 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:24:31.164 [Pipeline] {
00:24:31.176 [Pipeline] retry
00:24:31.178 [Pipeline] {
00:24:31.194 [Pipeline] sh
00:24:31.475 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:24:31.748 [Pipeline] }
00:24:31.769 [Pipeline] // retry
00:24:31.776 [Pipeline] }
00:24:31.799 [Pipeline] // withCredentials
00:24:31.812 [Pipeline] httpRequest
00:24:31.817 HttpMethod: GET
00:24:31.818 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:24:31.818 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:24:31.821 Response Code: HTTP/1.1 200 OK
00:24:31.822 Success: Status code 200 is in the accepted range: 200,404
00:24:31.823 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:24:33.049 [Pipeline] sh
00:24:33.334 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:24:34.724 [Pipeline] sh
00:24:35.007 + git -C dpdk log --oneline -n5
00:24:35.007 caf0f5d395 version: 22.11.4
00:24:35.007 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:24:35.007 dc9c799c7d vhost: fix missing spinlock unlock
00:24:35.007 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:24:35.007 6ef77f2a5e net/gve: fix RX buffer size alignment
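
The httpRequest/tar pairs above stage pinned snapshots of jbp, SPDK, and DPDK from the internal package mirror at 10.211.164.101 instead of cloning on the test node, and the git ls-remote confirms the v22.11.4 tag exists upstream. Outside Jenkins, the same fetch-and-unpack could look roughly like this (illustrative only; the mirror is only reachable from the lab network):

  curl -fO http://10.211.164.101/packages/spdk_5f5c5275309d65c182b25782730fba620b0c2be8.tar.gz
  tar --no-same-owner -xf spdk_5f5c5275309d65c182b25782730fba620b0c2be8.tar.gz
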
00:24:35.019 [Pipeline] }
00:24:35.039 [Pipeline] // stage
00:24:35.049 [Pipeline] stage
00:24:35.052 [Pipeline] { (Prepare)
00:24:35.075 [Pipeline] writeFile
00:24:35.093 [Pipeline] sh
00:24:35.378 + logger -p user.info -t JENKINS-CI
00:24:35.390 [Pipeline] sh
00:24:35.672 + logger -p user.info -t JENKINS-CI
00:24:35.684 [Pipeline] sh
00:24:35.966 + cat autorun-spdk.conf
00:24:35.966 SPDK_RUN_FUNCTIONAL_TEST=1
00:24:35.966 SPDK_TEST_NVMF=1
00:24:35.966 SPDK_TEST_NVME_CLI=1
00:24:35.966 SPDK_TEST_NVMF_TRANSPORT=tcp
00:24:35.966 SPDK_TEST_NVMF_NICS=e810
00:24:35.966 SPDK_TEST_VFIOUSER=1
00:24:35.966 SPDK_RUN_UBSAN=1
00:24:35.966 NET_TYPE=phy
00:24:35.966 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:24:35.966 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:35.973 RUN_NIGHTLY=1
00:24:35.978 [Pipeline] readFile
00:24:36.005 [Pipeline] withEnv
00:24:36.007 [Pipeline] {
00:24:36.019 [Pipeline] sh
00:24:36.297 + set -ex
00:24:36.297 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:24:36.297 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:24:36.297 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:24:36.297 ++ SPDK_TEST_NVMF=1
00:24:36.297 ++ SPDK_TEST_NVME_CLI=1
00:24:36.297 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:24:36.297 ++ SPDK_TEST_NVMF_NICS=e810
00:24:36.297 ++ SPDK_TEST_VFIOUSER=1
00:24:36.297 ++ SPDK_RUN_UBSAN=1
00:24:36.297 ++ NET_TYPE=phy
00:24:36.297 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:24:36.297 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:36.297 ++ RUN_NIGHTLY=1
00:24:36.297 + case $SPDK_TEST_NVMF_NICS in
00:24:36.297 + DRIVERS=ice
00:24:36.297 + [[ tcp == \r\d\m\a ]]
00:24:36.297 + [[ -n ice ]]
00:24:36.297 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:24:36.297 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:24:42.865 rmmod: ERROR: Module irdma is not currently loaded
00:24:42.865 rmmod: ERROR: Module i40iw is not currently loaded
00:24:42.865 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:24:42.865 + true
00:24:42.865 + for D in $DRIVERS
00:24:42.865 + sudo modprobe ice
00:24:42.875 + exit 0
00:24:42.875 [Pipeline] }
00:24:42.894 [Pipeline] // withEnv
00:24:42.901 [Pipeline] }
00:24:42.919 [Pipeline] // stage
00:24:42.929 [Pipeline] catchError
00:24:42.930 [Pipeline] {
00:24:42.944 [Pipeline] timeout
00:24:42.944 Timeout set to expire in 50 min
00:24:42.945 [Pipeline] {
00:24:42.959 [Pipeline] stage
00:24:42.960 [Pipeline] { (Tests)
00:24:42.973 [Pipeline] sh
00:24:43.253 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:24:43.253 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:24:43.253 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:24:43.253 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:24:43.253 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:43.253 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:24:43.253 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:24:43.253 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:24:43.253 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:24:43.253 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:24:43.253 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:24:43.253 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:24:43.253 + source /etc/os-release
00:24:43.253 ++ NAME='Fedora Linux'
00:24:43.253 ++ VERSION='38 (Cloud Edition)'
00:24:43.254 ++ ID=fedora
00:24:43.254 ++ VERSION_ID=38
00:24:43.254 ++ VERSION_CODENAME=
00:24:43.254 ++ PLATFORM_ID=platform:f38
00:24:43.254 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:24:43.254 ++ ANSI_COLOR='0;38;2;60;110;180'
00:24:43.254 ++ LOGO=fedora-logo-icon
00:24:43.254 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:24:43.254 ++ HOME_URL=https://fedoraproject.org/
00:24:43.254 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:24:43.254 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:24:43.254 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:24:43.254 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:24:43.254 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:24:43.254 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:24:43.254 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:24:43.254 ++ SUPPORT_END=2024-05-14
00:24:43.254 ++ VARIANT='Cloud Edition'
00:24:43.254 ++ VARIANT_ID=cloud
00:24:43.254 + uname -a
00:24:43.254 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:24:43.254 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:24:45.823 Hugepages
00:24:45.823 node hugesize free / total
00:24:45.823 node0 1048576kB 0 / 0
00:24:45.823 node0 2048kB 0 / 0
00:24:45.823 node1 1048576kB 0 / 0
00:24:45.823 node1 2048kB 0 / 0
00:24:45.823
00:24:45.823 Type BDF Vendor Device NUMA Driver Device Block devices
00:24:45.823 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:24:45.823 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:24:45.823 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:24:45.823 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:24:45.823 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:24:45.823 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:24:45.823 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:24:45.823 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:24:45.823 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:24:45.823 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:24:45.823 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:24:45.823 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:24:45.823 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:24:45.823 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:24:45.823 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:24:45.823 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:24:45.823 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:24:46.083 + rm -f /tmp/spdk-ld-path
00:24:46.083 + source autorun-spdk.conf
00:24:46.083 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:24:46.083 ++ SPDK_TEST_NVMF=1
00:24:46.083 ++ SPDK_TEST_NVME_CLI=1
00:24:46.083 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:24:46.083 ++ SPDK_TEST_NVMF_NICS=e810
00:24:46.083 ++ SPDK_TEST_VFIOUSER=1
00:24:46.083 ++ SPDK_RUN_UBSAN=1
00:24:46.083 ++ NET_TYPE=phy
00:24:46.083 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:24:46.083 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:46.083 ++ RUN_NIGHTLY=1
00:24:46.083 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:24:46.083 + [[ -n '' ]]
00:24:46.083 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
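
The Prepare stage above is driven entirely by autorun-spdk.conf: the traced withEnv script sources it, maps SPDK_TEST_NVMF_NICS=e810 to the ice kernel driver, unloads any stale RDMA modules, and loads the driver. Paraphrased as a sketch of the traced logic, not the script verbatim:

  set -ex
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
  case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;   # Intel E810 NICs are served by the ice driver
  esac
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true   # modules may simply not be loaded
  for D in $DRIVERS; do sudo modprobe "$D"; done
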
00:24:46.083 + for M in /var/spdk/build-*-manifest.txt
00:24:46.083 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:24:46.083 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:24:46.083 + for M in /var/spdk/build-*-manifest.txt
00:24:46.083 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:24:46.083 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:24:46.083 ++ uname
00:24:46.083 + [[ Linux == \L\i\n\u\x ]]
00:24:46.083 + sudo dmesg -T
00:24:46.083 + sudo dmesg --clear
00:24:46.083 + dmesg_pid=1850042
00:24:46.083 + [[ Fedora Linux == FreeBSD ]]
00:24:46.083 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:46.083 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:24:46.083 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:24:46.083 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:24:46.083 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:24:46.083 + [[ -x /usr/src/fio-static/fio ]]
00:24:46.083 + export FIO_BIN=/usr/src/fio-static/fio
00:24:46.083 + FIO_BIN=/usr/src/fio-static/fio
00:24:46.083 + sudo dmesg -Tw
00:24:46.083 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:24:46.083 + [[ ! -v VFIO_QEMU_BIN ]]
00:24:46.083 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:24:46.083 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:24:46.083 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:24:46.083 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:24:46.083 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:24:46.083 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:24:46.083 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:24:46.083 Test configuration:
00:24:46.083 SPDK_RUN_FUNCTIONAL_TEST=1
00:24:46.083 SPDK_TEST_NVMF=1
00:24:46.083 SPDK_TEST_NVME_CLI=1
00:24:46.083 SPDK_TEST_NVMF_TRANSPORT=tcp
00:24:46.083 SPDK_TEST_NVMF_NICS=e810
00:24:46.083 SPDK_TEST_VFIOUSER=1
00:24:46.083 SPDK_RUN_UBSAN=1
00:24:46.083 NET_TYPE=phy
00:24:46.083 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:24:46.083 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:46.083 RUN_NIGHTLY=1 03:21:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:46.083 03:21:27 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:24:46.083 03:21:27 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:46.083 03:21:27 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:46.083 03:21:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:46.083 03:21:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:46.083 03:21:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:46.083 03:21:27 -- paths/export.sh@5 -- $ export PATH
00:24:46.083 03:21:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:46.083 03:21:27 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:24:46.083 03:21:27 -- common/autobuild_common.sh@437 -- $ date +%s
00:24:46.083 03:21:27 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718068887.XXXXXX
00:24:46.083 03:21:27 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718068887.qNVqkb
00:24:46.083 03:21:27 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:24:46.083 03:21:27 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:24:46.083 03:21:27 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:46.083 03:21:27 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:24:46.083 03:21:27 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:24:46.083 03:21:27 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:24:46.083 03:21:27 -- common/autobuild_common.sh@453 -- $ get_config_params
00:24:46.083 03:21:27 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:24:46.083 03:21:27 -- common/autotest_common.sh@10 -- $ set +x
00:24:46.084 03:21:27 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:24:46.084 03:21:27 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:24:46.084 03:21:27 -- pm/common@17 -- $ local monitor
00:24:46.084 03:21:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:46.084 03:21:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:46.084 03:21:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:46.084 03:21:27 -- pm/common@21 -- $ date +%s
00:24:46.084 03:21:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:46.084 03:21:27 -- pm/common@21 -- $ date +%s
00:24:46.084 03:21:27 -- pm/common@25 -- $ sleep 1
00:24:46.084 03:21:27 -- pm/common@21 -- $ date +%s
00:24:46.084 03:21:27 -- pm/common@21 -- $ date +%s
00:24:46.084 03:21:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718068887
00:24:46.084 03:21:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718068887
00:24:46.084 03:21:27 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718068887
00:24:46.084 03:21:27 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718068887
00:24:46.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718068887_collect-vmstat.pm.log
00:24:46.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718068887_collect-cpu-load.pm.log
00:24:46.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718068887_collect-cpu-temp.pm.log
00:24:46.343 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718068887_collect-bmc-pm.bmc.pm.log
00:24:47.280 03:21:28 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:24:47.280 03:21:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:24:47.280 03:21:28 -- spdk/autobuild.sh@12 -- $ umask 022
00:24:47.280 03:21:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:47.280 03:21:28 -- spdk/autobuild.sh@16 -- $ date -u
00:24:47.280 Tue Jun 11 01:21:28 AM UTC 2024
00:24:47.280 03:21:28 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:24:47.280 v24.09-pre-59-g5f5c52753
00:24:47.280 03:21:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:24:47.280 03:21:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:24:47.280 03:21:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:24:47.280 03:21:28 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:24:47.280 03:21:28 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:24:47.280 03:21:28 -- common/autotest_common.sh@10 -- $ set +x
00:24:47.280 ************************************
00:24:47.280 START TEST ubsan
00:24:47.280 ************************************
00:24:47.280 03:21:28 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:24:47.280 using ubsan
00:24:47.280
00:24:47.280 real 0m0.000s
00:24:47.280 user 0m0.000s
00:24:47.280 sys 0m0.000s
00:24:47.280 03:21:28 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:24:47.280 03:21:28 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:24:47.280 ************************************
00:24:47.280 END TEST ubsan
00:24:47.280 ************************************
00:24:47.280 03:21:28 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
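
With the resource monitors running, autobuild.sh now picks the build flavor: SPDK_TEST_NATIVE_DPDK=v22.11.4 is non-empty, so the external DPDK checkout is built from source before SPDK itself. The decision traced at autobuild.sh@27-28 reduces to roughly this (a paraphrase, not the script verbatim):

  if [ -n "$SPDK_TEST_NATIVE_DPDK" ]; then
    run_test build_native_dpdk _build_native_dpdk
  fi
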
00:24:47.280 03:21:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:24:47.280 03:21:28 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
00:24:47.280 03:21:28 -- common/autotest_common.sh@1100 -- $ '[' 2 -le 1 ']'
00:24:47.280 03:21:28 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:24:47.280 03:21:28 -- common/autotest_common.sh@10 -- $ set +x
00:24:47.280 ************************************
00:24:47.280 START TEST build_native_dpdk
00:24:47.280 ************************************
00:24:47.280 03:21:28 build_native_dpdk -- common/autotest_common.sh@1124 -- $ _build_native_dpdk
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:24:47.280 03:21:28 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:24:47.281 caf0f5d395 version: 22.11.4
00:24:47.281 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:24:47.281 dc9c799c7d vhost: fix missing spinlock unlock
00:24:47.281 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:24:47.281 6ef77f2a5e net/gve: fix RX buffer size alignment
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:24:47.281 03:21:28 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:24:47.281 patching file config/rte_config.h
00:24:47.281 Hunk #1 succeeded at 60 (offset 1 line).
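
The long scripts/common.sh trace above is a field-wise version comparison: lt 22.11.4 21.11.0 splits both strings on ./-/:, walks the fields, and returns 1 as soon as a field of ver1 exceeds the matching field of ver2 (22 > 21 here), so the pre-21.11 compatibility path is skipped and only the rte_config.h patch is applied. Condensed to a sketch (simplified; the real cmp_versions also handles the other operators):

  lt() {   # succeeds only when $1 < $2, field by field
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
      if ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})); then return 1; fi
      if ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})); then return 0; fi
    done
    return 1   # equal is not less-than
  }
  lt 22.11.4 21.11.0   # returns 1, exactly as traced above
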
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:24:47.281 03:21:28 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:24:51.473 The Meson build system
00:24:51.473 Version: 1.3.1
00:24:51.473 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:24:51.473 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:24:51.473 Build type: native build
00:24:51.473 Program cat found: YES (/usr/bin/cat)
00:24:51.473 Project name: DPDK
00:24:51.473 Project version: 22.11.4
00:24:51.473 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:24:51.473 C linker for the host machine: gcc ld.bfd 2.39-16
00:24:51.473 Host machine cpu family: x86_64
00:24:51.473 Host machine cpu: x86_64
00:24:51.473 Message: ## Building in Developer Mode ##
00:24:51.473 Program pkg-config found: YES (/usr/bin/pkg-config)
00:24:51.473 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:24:51.473 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:24:51.473 Program objdump found: YES (/usr/bin/objdump)
00:24:51.473 Program python3 found: YES (/usr/bin/python3)
00:24:51.473 Program cat found: YES (/usr/bin/cat)
00:24:51.473 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:24:51.473 Checking for size of "void *" : 8
00:24:51.473 Checking for size of "void *" : 8 (cached)
00:24:51.473 Library m found: YES
00:24:51.473 Library numa found: YES
00:24:51.473 Has header "numaif.h" : YES
00:24:51.473 Library fdt found: NO
00:24:51.473 Library execinfo found: NO
00:24:51.473 Has header "execinfo.h" : YES
00:24:51.473 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:24:51.473 Run-time dependency libarchive found: NO (tried pkgconfig)
00:24:51.473 Run-time dependency libbsd found: NO (tried pkgconfig)
00:24:51.473 Run-time dependency jansson found: NO (tried pkgconfig)
00:24:51.473 Run-time dependency openssl found: YES 3.0.9
00:24:51.473 Run-time dependency libpcap found: YES 1.10.4
00:24:51.473 Has header "pcap.h" with dependency libpcap: YES
00:24:51.473 Compiler for C supports arguments -Wcast-qual: YES
00:24:51.473 Compiler for C supports arguments -Wdeprecated: YES
00:24:51.473 Compiler for C supports arguments -Wformat: YES
00:24:51.473 Compiler for C supports arguments -Wformat-nonliteral: NO
00:24:51.473 Compiler for C supports arguments -Wformat-security: NO
00:24:51.473 Compiler for C supports arguments -Wmissing-declarations: YES
00:24:51.473 Compiler for C supports arguments -Wmissing-prototypes: YES
00:24:51.473 Compiler for C supports arguments -Wnested-externs: YES
00:24:51.473 Compiler for C supports arguments -Wold-style-definition: YES
00:24:51.473 Compiler for C supports arguments -Wpointer-arith: YES
00:24:51.473 Compiler for C supports arguments -Wsign-compare: YES
00:24:51.473 Compiler for C supports arguments -Wstrict-prototypes: YES
00:24:51.473 Compiler for C supports arguments -Wundef: YES
00:24:51.473 Compiler for C supports arguments -Wwrite-strings: YES
00:24:51.473 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:24:51.473 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:24:51.473 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:24:51.473 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:24:51.473 Compiler for C supports arguments -mavx512f: YES
00:24:51.473 Checking if "AVX512 checking" compiles: YES
00:24:51.473 Fetching value of define "__SSE4_2__" : 1
00:24:51.473 Fetching value of define "__AES__" : 1
00:24:51.473 Fetching value of define "__AVX__" : 1
00:24:51.473 Fetching value of define "__AVX2__" : 1
00:24:51.473 Fetching value of define "__AVX512BW__" : 1
00:24:51.473 Fetching value of define "__AVX512CD__" : 1
00:24:51.473 Fetching value of define "__AVX512DQ__" : 1
00:24:51.473 Fetching value of define "__AVX512F__" : 1
00:24:51.473 Fetching value of define "__AVX512VL__" : 1
00:24:51.473 Fetching value of define "__PCLMUL__" : 1
00:24:51.473 Fetching value of define "__RDRND__" : 1
00:24:51.473 Fetching value of define "__RDSEED__" : 1
00:24:51.473 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:24:51.473 Compiler for C supports arguments -Wno-format-truncation: YES
00:24:51.473 Message: lib/kvargs: Defining dependency "kvargs"
00:24:51.473 Message: lib/telemetry: Defining dependency "telemetry"
00:24:51.473 Checking for function "getentropy" : YES
00:24:51.473 Message: lib/eal: Defining dependency "eal"
00:24:51.473 Message: lib/ring: Defining dependency "ring"
00:24:51.473 Message: lib/rcu: Defining dependency "rcu"
00:24:51.473 Message: lib/mempool: Defining dependency "mempool"
00:24:51.473 Message: lib/mbuf: Defining dependency "mbuf"
00:24:51.473 Fetching value of define "__PCLMUL__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512F__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512BW__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512VL__" : 1 (cached)
00:24:51.473 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:24:51.473 Compiler for C supports arguments -mpclmul: YES
00:24:51.473 Compiler for C supports arguments -maes: YES
00:24:51.473 Compiler for C supports arguments -mavx512f: YES (cached)
00:24:51.473 Compiler for C supports arguments -mavx512bw: YES
00:24:51.473 Compiler for C supports arguments -mavx512dq: YES
00:24:51.473 Compiler for C supports arguments -mavx512vl: YES
00:24:51.473 Compiler for C supports arguments -mvpclmulqdq: YES
00:24:51.473 Compiler for C supports arguments -mavx2: YES
00:24:51.473 Compiler for C supports arguments -mavx: YES
00:24:51.473 Message: lib/net: Defining dependency "net"
00:24:51.473 Message: lib/meter: Defining dependency "meter"
00:24:51.473 Message: lib/ethdev: Defining dependency "ethdev"
00:24:51.473 Message: lib/pci: Defining dependency "pci"
00:24:51.473 Message: lib/cmdline: Defining dependency "cmdline"
00:24:51.473 Message: lib/metrics: Defining dependency "metrics"
00:24:51.473 Message: lib/hash: Defining dependency "hash"
00:24:51.473 Message: lib/timer: Defining dependency "timer"
00:24:51.473 Fetching value of define "__AVX2__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512F__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512VL__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512CD__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512BW__" : 1 (cached)
00:24:51.473 Message: lib/acl: Defining dependency "acl"
00:24:51.473 Message: lib/bbdev: Defining dependency "bbdev"
00:24:51.473 Message: lib/bitratestats: Defining dependency "bitratestats"
00:24:51.473 Run-time dependency libelf found: YES 0.190
00:24:51.473 Message: lib/bpf: Defining dependency "bpf"
00:24:51.473 Message: lib/cfgfile: Defining dependency "cfgfile"
00:24:51.473 Message: lib/compressdev: Defining dependency "compressdev"
00:24:51.473 Message: lib/cryptodev: Defining dependency "cryptodev"
00:24:51.473 Message: lib/distributor: Defining dependency "distributor"
00:24:51.473 Message: lib/efd: Defining dependency "efd"
00:24:51.473 Message: lib/eventdev: Defining dependency "eventdev"
00:24:51.473 Message: lib/gpudev: Defining dependency "gpudev"
00:24:51.473 Message: lib/gro: Defining dependency "gro"
00:24:51.473 Message: lib/gso: Defining dependency "gso"
00:24:51.473 Message: lib/ip_frag: Defining dependency "ip_frag"
00:24:51.473 Message: lib/jobstats: Defining dependency "jobstats"
00:24:51.473 Message: lib/latencystats: Defining dependency "latencystats"
00:24:51.473 Message: lib/lpm: Defining dependency "lpm"
00:24:51.473 Fetching value of define "__AVX512F__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:24:51.473 Fetching value of define "__AVX512IFMA__" : (undefined)
00:24:51.473 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:24:51.473 Message: lib/member: Defining dependency "member"
00:24:51.473 Message: lib/pcapng: Defining dependency "pcapng"
00:24:51.473 Compiler for C supports arguments -Wno-cast-qual: YES
00:24:51.473 Message: lib/power: Defining dependency "power"
00:24:51.473 Message: lib/rawdev: Defining dependency "rawdev"
00:24:51.473 Message: lib/regexdev: Defining dependency "regexdev"
00:24:51.473 Message: lib/dmadev: Defining dependency "dmadev"
00:24:51.473 Message: lib/rib: Defining dependency "rib"
00:24:51.473 Message: lib/reorder: Defining dependency "reorder"
00:24:51.473 Message: lib/sched: Defining dependency "sched"
00:24:51.474 Message: lib/security: Defining dependency "security"
00:24:51.474 Message: lib/stack: Defining dependency "stack"
00:24:51.474 Has header "linux/userfaultfd.h" : YES
00:24:51.474 Message: lib/vhost: Defining dependency "vhost"
00:24:51.474 Message: lib/ipsec: Defining dependency "ipsec"
00:24:51.474 Fetching value of define "__AVX512F__" : 1 (cached)
00:24:51.474 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:24:51.474 Fetching value of define "__AVX512BW__" : 1 (cached)
00:24:51.474 Message: lib/fib: Defining dependency "fib"
00:24:51.474 Message: lib/port: Defining dependency "port"
00:24:51.474 Message: lib/pdump: Defining dependency "pdump"
00:24:51.474 Message: lib/table: Defining dependency "table"
00:24:51.474 Message: lib/pipeline: Defining dependency "pipeline"
00:24:51.474 Message: lib/graph: Defining dependency "graph"
00:24:51.474 Message: lib/node: Defining dependency "node"
00:24:51.474 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:24:51.474 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:24:51.474 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:24:51.474 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:24:51.474 Compiler for C supports arguments -Wno-sign-compare: YES
00:24:51.474 Compiler for C supports arguments -Wno-unused-value: YES
00:24:51.474 Compiler for C supports arguments -Wno-format: YES
00:24:51.474 Compiler for C supports arguments -Wno-format-security: YES
00:24:51.474 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:24:52.411 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:24:52.411 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:24:52.411 Compiler for C supports arguments -Wno-unused-parameter: YES
00:24:52.411 Fetching value of define "__AVX2__" : 1 (cached)
00:24:52.411 Fetching value of define "__AVX512F__" : 1 (cached)
00:24:52.411 Fetching value of define "__AVX512BW__" : 1 (cached)
00:24:52.411 Compiler for C supports arguments -mavx512f: YES (cached)
00:24:52.411 Compiler for C supports arguments -mavx512bw: YES (cached)
00:24:52.411 Compiler for C supports arguments -march=skylake-avx512: YES
00:24:52.411 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:24:52.411 Program doxygen found: YES (/usr/bin/doxygen)
00:24:52.411 Configuring doxy-api.conf using configuration
00:24:52.411 Program sphinx-build found: NO
00:24:52.411 Configuring rte_build_config.h using configuration
00:24:52.411 Message:
00:24:52.411 =================
00:24:52.411 Applications Enabled
00:24:52.411 =================
00:24:52.411
00:24:52.411 apps:
00:24:52.411 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:24:52.411 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:24:52.411 test-security-perf,
00:24:52.411
00:24:52.411 Message:
00:24:52.411 =================
00:24:52.411 Libraries Enabled
00:24:52.411 =================
00:24:52.411
00:24:52.411 libs:
00:24:52.411 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:24:52.412 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:24:52.412 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:24:52.412 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:24:52.412 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:24:52.412 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:24:52.412 table, pipeline, graph, node,
00:24:52.412
00:24:52.412 Message:
00:24:52.412 ===============
00:24:52.412 Drivers Enabled
00:24:52.412 ===============
00:24:52.412
00:24:52.412 common:
00:24:52.412
00:24:52.412 bus:
00:24:52.412 pci, vdev,
00:24:52.412 mempool:
00:24:52.412 ring,
00:24:52.412 dma:
00:24:52.412
00:24:52.412 net:
00:24:52.412 i40e,
00:24:52.412 raw:
00:24:52.412
00:24:52.412 crypto:
00:24:52.412
00:24:52.412 compress:
00:24:52.412
00:24:52.412 regex:
00:24:52.412
00:24:52.412 vdpa:
00:24:52.412
00:24:52.412 event:
00:24:52.412
00:24:52.412 baseband:
00:24:52.412
00:24:52.412 gpu:
00:24:52.412
00:24:52.412
00:24:52.412 Message:
00:24:52.412 =================
00:24:52.412 Content Skipped
00:24:52.412 =================
00:24:52.412
00:24:52.412 apps:
00:24:52.412
00:24:52.412 libs:
00:24:52.412 kni: explicitly disabled via build config (deprecated lib)
00:24:52.412 flow_classify: explicitly disabled via build config (deprecated lib)
00:24:52.412
00:24:52.412 drivers:
00:24:52.412 common/cpt: not in enabled drivers build config
00:24:52.412 common/dpaax: not in enabled drivers build config
00:24:52.412 common/iavf: not in enabled drivers build config
00:24:52.412 common/idpf: not in enabled drivers build config
00:24:52.412 common/mvep: not in enabled drivers build config
00:24:52.412 common/octeontx: not in enabled drivers build config
00:24:52.412 bus/auxiliary: not in enabled drivers build config
00:24:52.412 bus/dpaa: not in enabled drivers build config
00:24:52.412 bus/fslmc: not in enabled drivers build config
00:24:52.412 bus/ifpga: not in enabled drivers build config
00:24:52.412 bus/vmbus: not in enabled drivers build config
00:24:52.412 common/cnxk: not in enabled drivers build config
00:24:52.412 common/mlx5: not in enabled drivers build config
00:24:52.412 common/qat: not in enabled drivers build config
00:24:52.412 common/sfc_efx: not in enabled drivers build config
00:24:52.412 mempool/bucket: not in enabled drivers build config
00:24:52.412 mempool/cnxk: not in enabled drivers build config
00:24:52.412 mempool/dpaa: not in enabled drivers build config
00:24:52.412 mempool/dpaa2: not in enabled drivers build config
00:24:52.412 mempool/octeontx: not in enabled drivers build config
00:24:52.412 mempool/stack: not in enabled drivers build config
00:24:52.412 dma/cnxk: not in enabled drivers build config
00:24:52.412 dma/dpaa: not in enabled drivers build config
00:24:52.412 dma/dpaa2: not in enabled drivers build config
00:24:52.412 dma/hisilicon: not in enabled drivers build config
00:24:52.412 dma/idxd: not in enabled drivers build config
00:24:52.412 dma/ioat: not in enabled drivers build config
00:24:52.412 dma/skeleton: not in enabled drivers build config
00:24:52.412 net/af_packet: not in enabled drivers build config
00:24:52.412 net/af_xdp: not in enabled drivers build config
00:24:52.412 net/ark: not in enabled drivers build config
00:24:52.412 net/atlantic: not in enabled drivers build config
00:24:52.412 net/avp: not in enabled drivers build config
00:24:52.412 net/axgbe: not in enabled drivers build config
00:24:52.412 net/bnx2x: not in enabled drivers build config
00:24:52.412 net/bnxt: not in enabled drivers build config
00:24:52.412 net/bonding: not in enabled drivers build config
00:24:52.412 net/cnxk: not in enabled drivers build config
00:24:52.412 net/cxgbe: not in enabled drivers build config
00:24:52.412 net/dpaa: not in enabled drivers build config
00:24:52.412 net/dpaa2: not in enabled drivers build config
00:24:52.412 net/e1000: not in enabled drivers build config
00:24:52.412 net/ena: not in enabled drivers build config
00:24:52.412 net/enetc: not in enabled drivers build config
00:24:52.412 net/enetfec: not in enabled drivers build config
00:24:52.412 net/enic: not in enabled drivers build config
00:24:52.412 net/failsafe: not in enabled drivers build config
00:24:52.412 net/fm10k: not in enabled drivers build config
00:24:52.412 net/gve: not in enabled drivers build config
00:24:52.412 net/hinic: not in enabled drivers build config
00:24:52.412 net/hns3: not in enabled drivers build config
00:24:52.412 net/iavf: not in enabled drivers build config
00:24:52.412 net/ice: not in enabled drivers build config
00:24:52.412 net/idpf: not in enabled drivers build config
00:24:52.412 net/igc: not in enabled drivers build config
00:24:52.412 net/ionic: not in enabled drivers build config
00:24:52.412 net/ipn3ke: not in enabled drivers build config
00:24:52.412 net/ixgbe: not in enabled drivers build config
00:24:52.412 net/kni: not in enabled drivers build config
00:24:52.412 net/liquidio: not in enabled drivers build config
00:24:52.412 net/mana: not in enabled drivers build config
00:24:52.412 net/memif: not in enabled drivers build config
00:24:52.412 net/mlx4: not in enabled drivers build config
00:24:52.412 net/mlx5: not in enabled drivers build config
00:24:52.412 net/mvneta: not in enabled drivers build config
00:24:52.412 net/mvpp2: not in enabled drivers build config
00:24:52.412 net/netvsc: not in enabled drivers build config
00:24:52.412 net/nfb: not in enabled drivers build config
00:24:52.412 net/nfp: not in enabled drivers build config
00:24:52.412 net/ngbe: not in enabled drivers build config
00:24:52.412 net/null: not in enabled drivers build config
00:24:52.412 net/octeontx: not in enabled drivers build config
00:24:52.412 net/octeon_ep: not in enabled drivers build config
00:24:52.412 net/pcap: not in enabled drivers build config
00:24:52.412 net/pfe: not in enabled drivers build config
00:24:52.412 net/qede: not in enabled drivers build config
00:24:52.412 net/ring: not in enabled drivers build config
00:24:52.412 net/sfc: not in enabled drivers build config
00:24:52.412 net/softnic: not in enabled drivers build config
00:24:52.412 net/tap: not in enabled drivers build config
00:24:52.412 net/thunderx: not in enabled drivers build config
00:24:52.412 net/txgbe: not in enabled drivers build config
00:24:52.412 net/vdev_netvsc: not in enabled drivers build config
00:24:52.412 net/vhost: not in enabled drivers build config
00:24:52.412 net/virtio: not in enabled drivers build config
00:24:52.412 net/vmxnet3: not in enabled drivers build config
00:24:52.412 raw/cnxk_bphy: not in enabled drivers build config
00:24:52.412 raw/cnxk_gpio: not in enabled drivers build config
00:24:52.412 raw/dpaa2_cmdif: not in enabled drivers build config
00:24:52.412 raw/ifpga: not in enabled drivers build config
00:24:52.412 raw/ntb: not in enabled drivers build config
00:24:52.412 raw/skeleton: not in enabled drivers build config
00:24:52.412 crypto/armv8: not in enabled drivers build config
00:24:52.412 crypto/bcmfs: not in enabled drivers build config
00:24:52.412 crypto/caam_jr: not in enabled drivers build config
00:24:52.412 crypto/ccp: not in enabled drivers build config
00:24:52.412 crypto/cnxk: not in enabled drivers build config
00:24:52.412 crypto/dpaa_sec: not in enabled drivers build config
00:24:52.412 crypto/dpaa2_sec: not in enabled drivers build config
00:24:52.412 crypto/ipsec_mb: not in enabled drivers build config
00:24:52.412 crypto/mlx5: not in enabled drivers build config
00:24:52.412 crypto/mvsam: not in enabled drivers build config
00:24:52.412 crypto/nitrox: not in enabled drivers build config
00:24:52.412 crypto/null: not in enabled drivers build config
00:24:52.412 crypto/octeontx: not in enabled drivers build config
00:24:52.412 crypto/openssl: not in enabled drivers build config
00:24:52.412 crypto/scheduler: not in enabled drivers build config
00:24:52.412 crypto/uadk: not in enabled drivers build config
00:24:52.412 crypto/virtio: not in enabled drivers build config
00:24:52.412 compress/isal: not in enabled drivers build config
00:24:52.412 compress/mlx5: not in enabled drivers build config
00:24:52.412 compress/octeontx: not in enabled drivers build config
00:24:52.412 compress/zlib: not in enabled drivers build config
00:24:52.412 regex/mlx5: not in enabled drivers build config
00:24:52.412 regex/cn9k: not in enabled drivers build config
00:24:52.412 vdpa/ifc: not in enabled drivers build config
00:24:52.412 vdpa/mlx5: not in enabled drivers build config
00:24:52.412 vdpa/sfc: not in enabled drivers build config
00:24:52.412 event/cnxk: not in enabled drivers build config
00:24:52.412 event/dlb2: not in enabled drivers build config
00:24:52.412 event/dpaa: not in enabled drivers build config
00:24:52.412 event/dpaa2: not in enabled drivers build config
00:24:52.412 event/dsw: not in enabled drivers build config
00:24:52.412 event/opdl: not in enabled drivers build config
00:24:52.412 event/skeleton: not in enabled drivers build config
00:24:52.412 event/sw: not in enabled drivers build config
00:24:52.412 event/octeontx: not in enabled drivers build config
00:24:52.413 baseband/acc: not in enabled drivers build config
00:24:52.413 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:24:52.413 baseband/fpga_lte_fec: not in enabled drivers build config
00:24:52.413 baseband/la12xx: not in enabled drivers build config
00:24:52.413 baseband/null: not in enabled drivers build config
00:24:52.413 baseband/turbo_sw: not in enabled drivers build config
00:24:52.413 gpu/cuda: not in enabled drivers build config
00:24:52.413
00:24:52.413
00:24:52.413 Build targets in project: 311
00:24:52.413
00:24:52.413 DPDK 22.11.4
00:24:52.413
00:24:52.413 User defined options
00:24:52.413 libdir : lib
00:24:52.413 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:24:52.413 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:24:52.413 c_link_args :
00:24:52.413 enable_docs : false
00:24:52.413 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:24:52.413 enable_kmods : false
00:24:52.413 machine : native
00:24:52.413 tests : false
00:24:52.413
00:24:52.413 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:24:52.413
00:24:52.413 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
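
Before the compile starts, note the two warnings bracketing the configure step: -Dmachine=native is deprecated in favor of cpu_instruction_set, and Meson prefers the explicit `meson setup` form. Reduced to its essentials, the configure-and-build pair this job runs is the following (same options as in the log, trimmed to the load-bearing ones; a sketch using the non-deprecated `meson setup` spelling):

  meson setup build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
  ninja -C build-tmp -j96
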
00:24:52.679 03:21:33 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:24:52.679 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:24:52.679 [1/740] Generating lib/rte_telemetry_def with a custom command 00:24:52.679 [2/740] Generating lib/rte_telemetry_mingw with a custom command 00:24:52.679 [3/740] Generating lib/rte_kvargs_def with a custom command 00:24:52.679 [4/740] Generating lib/rte_kvargs_mingw with a custom command 00:24:52.680 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:24:52.680 [6/740] Generating lib/rte_rcu_def with a custom command 00:24:52.680 [7/740] Generating lib/rte_eal_def with a custom command 00:24:52.680 [8/740] Generating lib/rte_eal_mingw with a custom command 00:24:52.680 [9/740] Generating lib/rte_mempool_def with a custom command 00:24:52.680 [10/740] Generating lib/rte_ring_def with a custom command 00:24:52.680 [11/740] Generating lib/rte_rcu_mingw with a custom command 00:24:52.680 [12/740] Generating lib/rte_mempool_mingw with a custom command 00:24:52.680 [13/740] Generating lib/rte_mbuf_mingw with a custom command 00:24:52.680 [14/740] Generating lib/rte_mbuf_def with a custom command 00:24:52.680 [15/740] Generating lib/rte_ring_mingw with a custom command 00:24:52.680 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:24:52.680 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:24:52.680 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:24:52.680 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:24:52.680 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:24:52.680 [21/740] Generating lib/rte_net_mingw with a custom command 00:24:52.680 [22/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:24:52.680 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:24:52.680 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:24:52.680 [25/740] Generating lib/rte_net_def with a custom command 00:24:52.680 [26/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:24:52.680 [27/740] Generating lib/rte_meter_def with a custom command 00:24:52.680 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:24:52.680 [29/740] Generating lib/rte_meter_mingw with a custom command 00:24:52.943 [30/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:24:52.943 [31/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:24:52.943 [32/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:24:52.943 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:24:52.943 [34/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:24:52.943 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:24:52.943 [36/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:24:52.943 [37/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:24:52.943 [38/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:24:52.943 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:24:52.943 [40/740] Generating lib/rte_ethdev_def with a custom command 00:24:52.943 [41/740] 
Generating lib/rte_ethdev_mingw with a custom command 00:24:52.943 [42/740] Linking static target lib/librte_kvargs.a 00:24:52.943 [43/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:24:52.943 [44/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:24:52.943 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:24:52.943 [46/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:24:52.943 [47/740] Generating lib/rte_pci_def with a custom command 00:24:52.943 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:24:52.943 [49/740] Generating lib/rte_pci_mingw with a custom command 00:24:52.943 [50/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:24:52.943 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:24:52.943 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:24:52.943 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:24:52.943 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:24:52.943 [55/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:24:52.943 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:24:52.943 [57/740] Generating lib/rte_cmdline_def with a custom command 00:24:52.943 [58/740] Generating lib/rte_cmdline_mingw with a custom command 00:24:52.943 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:24:52.943 [60/740] Generating lib/rte_metrics_mingw with a custom command 00:24:52.943 [61/740] Generating lib/rte_metrics_def with a custom command 00:24:52.943 [62/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:24:52.943 [63/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:24:52.943 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:24:52.943 [65/740] Generating lib/rte_hash_def with a custom command 00:24:52.943 [66/740] Linking static target lib/librte_pci.a 00:24:52.943 [67/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:24:52.943 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:24:52.943 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:24:52.943 [70/740] Generating lib/rte_hash_mingw with a custom command 00:24:52.943 [71/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:24:52.943 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:24:52.943 [73/740] Generating lib/rte_timer_def with a custom command 00:24:52.943 [74/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:24:52.943 [75/740] Generating lib/rte_timer_mingw with a custom command 00:24:52.943 [76/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:24:52.943 [77/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:24:52.943 [78/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:24:52.943 [79/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:24:52.943 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:24:52.943 [81/740] Linking static target lib/librte_meter.a 00:24:52.943 [82/740] Linking static target lib/librte_ring.a 00:24:52.943 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 
00:24:52.943 [84/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:24:52.943 [85/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:24:52.943 [86/740] Generating lib/rte_bbdev_def with a custom command 00:24:52.943 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:24:52.943 [88/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:24:52.943 [89/740] Generating lib/rte_bbdev_mingw with a custom command 00:24:52.943 [90/740] Generating lib/rte_acl_def with a custom command 00:24:52.943 [91/740] Generating lib/rte_bitratestats_def with a custom command 00:24:52.943 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:24:52.943 [93/740] Generating lib/rte_acl_mingw with a custom command 00:24:52.943 [94/740] Generating lib/rte_bitratestats_mingw with a custom command 00:24:52.943 [95/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:24:52.943 [96/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:24:52.943 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:24:52.943 [98/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:24:52.943 [99/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:24:52.943 [100/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:24:52.943 [101/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:24:52.943 [102/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:24:52.943 [103/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:24:53.210 [104/740] Generating lib/rte_bpf_def with a custom command 00:24:53.210 [105/740] Generating lib/rte_bpf_mingw with a custom command 00:24:53.210 [106/740] Generating lib/rte_cfgfile_mingw with a custom command 00:24:53.210 [107/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:24:53.210 [108/740] Generating lib/rte_cfgfile_def with a custom command 00:24:53.210 [109/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:24:53.210 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:24:53.210 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:24:53.210 [112/740] Generating lib/rte_compressdev_def with a custom command 00:24:53.210 [113/740] Generating lib/rte_compressdev_mingw with a custom command 00:24:53.210 [114/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:24:53.210 [115/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:24:53.210 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:24:53.210 [117/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:24:53.210 [118/740] Generating lib/rte_cryptodev_mingw with a custom command 00:24:53.210 [119/740] Generating lib/rte_cryptodev_def with a custom command 00:24:53.210 [120/740] Generating lib/rte_distributor_def with a custom command 00:24:53.210 [121/740] Generating lib/rte_efd_def with a custom command 00:24:53.210 [122/740] Generating lib/rte_distributor_mingw with a custom command 00:24:53.210 [123/740] Generating lib/rte_efd_mingw with a custom command 00:24:53.210 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:24:53.210 [125/740] Generating lib/rte_eventdev_def with a custom command 
00:24:53.210 [126/740] Generating lib/rte_eventdev_mingw with a custom command 00:24:53.210 [127/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.210 [128/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.210 [129/740] Generating lib/rte_gpudev_def with a custom command 00:24:53.210 [130/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.210 [131/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:24:53.210 [132/740] Generating lib/rte_gpudev_mingw with a custom command 00:24:53.210 [133/740] Linking target lib/librte_kvargs.so.23.0 00:24:53.472 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:24:53.472 [135/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:24:53.472 [136/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.472 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:24:53.472 [138/740] Generating lib/rte_gro_def with a custom command 00:24:53.472 [139/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:24:53.472 [140/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:24:53.472 [141/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:24:53.472 [142/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:24:53.472 [143/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:24:53.472 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:24:53.472 [145/740] Generating lib/rte_gro_mingw with a custom command 00:24:53.472 [146/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:24:53.472 [147/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:24:53.472 [148/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:24:53.472 [149/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:24:53.472 [150/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:24:53.472 [151/740] Generating lib/rte_gso_def with a custom command 00:24:53.472 [152/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:24:53.472 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:24:53.472 [154/740] Linking static target lib/librte_cfgfile.a 00:24:53.472 [155/740] Generating lib/rte_gso_mingw with a custom command 00:24:53.472 [156/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:24:53.472 [157/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:24:53.472 [158/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:24:53.472 [159/740] Generating lib/rte_ip_frag_mingw with a custom command 00:24:53.472 [160/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:24:53.472 [161/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:24:53.472 [162/740] Generating lib/rte_ip_frag_def with a custom command 00:24:53.472 [163/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:24:53.472 [164/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:24:53.472 [165/740] Generating lib/rte_jobstats_def with a custom command 00:24:53.472 [166/740] Generating lib/rte_jobstats_mingw with a custom 
command 00:24:53.472 [167/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:24:53.472 [168/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:24:53.472 [169/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:24:53.472 [170/740] Linking static target lib/librte_metrics.a 00:24:53.472 [171/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:24:53.472 [172/740] Generating lib/rte_lpm_mingw with a custom command 00:24:53.472 [173/740] Generating lib/rte_latencystats_def with a custom command 00:24:53.472 [174/740] Generating lib/rte_latencystats_mingw with a custom command 00:24:53.472 [175/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:24:53.733 [176/740] Generating lib/rte_lpm_def with a custom command 00:24:53.734 [177/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:24:53.734 [178/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:24:53.734 [179/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:24:53.734 [180/740] Linking static target lib/librte_cmdline.a 00:24:53.734 [181/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:24:53.734 [182/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:24:53.734 [183/740] Generating lib/rte_member_mingw with a custom command 00:24:53.734 [184/740] Generating lib/rte_member_def with a custom command 00:24:53.734 [185/740] Generating lib/rte_pcapng_def with a custom command 00:24:53.734 [186/740] Generating lib/rte_pcapng_mingw with a custom command 00:24:53.734 [187/740] Linking static target lib/librte_timer.a 00:24:53.734 [188/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:24:53.734 [189/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:24:53.734 [190/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:24:53.734 [191/740] Linking static target lib/librte_telemetry.a 00:24:53.734 [192/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:24:53.734 [193/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:24:53.734 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:24:53.734 [195/740] Linking static target lib/librte_net.a 00:24:53.734 [196/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:24:53.734 [197/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:24:53.734 [198/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:24:53.734 [199/740] Linking static target lib/librte_jobstats.a 00:24:53.734 [200/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:24:53.734 [201/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:24:53.734 [202/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:24:53.734 [203/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:24:53.734 [204/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:24:53.734 [205/740] Generating lib/rte_power_def with a custom command 00:24:53.734 [206/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:24:53.734 [207/740] Linking static target lib/librte_bitratestats.a 00:24:53.734 [208/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:24:53.734 [209/740] Generating lib/rte_power_mingw with a custom command 00:24:53.734 [210/740] Compiling C 
object lib/librte_power.a.p/power_power_common.c.o 00:24:53.734 [211/740] Generating lib/rte_rawdev_mingw with a custom command 00:24:53.734 [212/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:24:53.734 [213/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:24:53.734 [214/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:24:53.734 [215/740] Generating lib/rte_rawdev_def with a custom command 00:24:53.734 [216/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:24:53.734 [217/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:24:53.734 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:24:53.734 [219/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:24:53.734 [220/740] Generating lib/rte_regexdev_def with a custom command 00:24:53.734 [221/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:24:53.734 [222/740] Generating lib/rte_dmadev_def with a custom command 00:24:53.734 [223/740] Generating lib/rte_regexdev_mingw with a custom command 00:24:53.734 [224/740] Generating lib/rte_dmadev_mingw with a custom command 00:24:53.734 [225/740] Generating lib/rte_rib_def with a custom command 00:24:53.734 [226/740] Generating lib/rte_rib_mingw with a custom command 00:24:53.734 [227/740] Generating lib/rte_reorder_def with a custom command 00:24:53.734 [228/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:24:53.734 [229/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:24:53.734 [230/740] Generating lib/rte_reorder_mingw with a custom command 00:24:53.734 [231/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:24:53.996 [232/740] Generating lib/rte_sched_def with a custom command 00:24:53.996 [233/740] Generating lib/rte_sched_mingw with a custom command 00:24:53.996 [234/740] Generating lib/rte_security_def with a custom command 00:24:53.996 [235/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:24:53.996 [236/740] Generating lib/rte_security_mingw with a custom command 00:24:53.996 [237/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:24:53.996 [238/740] Generating lib/rte_stack_def with a custom command 00:24:53.996 [239/740] Generating lib/rte_stack_mingw with a custom command 00:24:53.996 [240/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:24:53.996 [241/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:24:53.996 [242/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:24:53.996 [243/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:24:53.996 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:24:53.996 [245/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:24:53.996 [246/740] Generating lib/rte_vhost_def with a custom command 00:24:53.996 [247/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:24:53.996 [248/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:24:53.996 [249/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:24:53.996 [250/740] Generating lib/rte_vhost_mingw with a custom command 00:24:53.996 [251/740] Linking static target lib/librte_compressdev.a 00:24:53.996 [252/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.996 [253/740] 
Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:24:53.996 [254/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:24:53.996 [255/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:24:53.996 [256/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:24:53.996 [257/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:24:53.996 [258/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:24:53.996 [259/740] Generating lib/rte_ipsec_def with a custom command 00:24:53.996 [260/740] Linking static target lib/librte_stack.a 00:24:53.996 [261/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.996 [262/740] Generating lib/rte_ipsec_mingw with a custom command 00:24:53.996 [263/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.996 [264/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:24:53.996 [265/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:24:53.996 [266/740] Generating lib/rte_fib_mingw with a custom command 00:24:53.996 [267/740] Generating lib/rte_fib_def with a custom command 00:24:53.996 [268/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:24:53.996 [269/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:24:53.996 [270/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:24:53.996 [271/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:24:53.996 [272/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:24:54.258 [273/740] Linking static target lib/librte_mempool.a 00:24:54.258 [274/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:24:54.258 [275/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:24:54.258 [276/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.258 [277/740] Linking static target lib/librte_rcu.a 00:24:54.258 [278/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.258 [279/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:24:54.258 [280/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:24:54.258 [281/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:24:54.258 [282/740] Linking static target lib/librte_bbdev.a 00:24:54.258 [283/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.258 [284/740] Generating lib/rte_port_def with a custom command 00:24:54.258 [285/740] Generating lib/rte_port_mingw with a custom command 00:24:54.258 [286/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:24:54.258 [287/740] Linking target lib/librte_telemetry.so.23.0 00:24:54.258 [288/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:24:54.258 [289/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:24:54.258 [290/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:24:54.258 [291/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:24:54.258 [292/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:24:54.258 [293/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:24:54.258 [294/740] Generating lib/rte_pdump_mingw with a custom command 00:24:54.258 
[295/740] Generating lib/rte_pdump_def with a custom command 00:24:54.258 [296/740] Linking static target lib/librte_rawdev.a 00:24:54.258 [297/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:24:54.258 [298/740] Linking static target lib/librte_gpudev.a 00:24:54.259 [299/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:24:54.259 [300/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:24:54.259 [301/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.259 [302/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:24:54.259 [303/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:24:54.259 [304/740] Linking static target lib/librte_dmadev.a 00:24:54.522 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:24:54.522 [306/740] Linking static target lib/librte_gro.a 00:24:54.522 [307/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:24:54.522 [308/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:24:54.522 [309/740] Linking static target lib/librte_distributor.a 00:24:54.522 [310/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:24:54.522 [311/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:24:54.522 [312/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:24:54.522 [313/740] Linking static target lib/librte_latencystats.a 00:24:54.522 [314/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:24:54.522 [315/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:24:54.522 [316/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:24:54.522 [317/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:24:54.522 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:24:54.522 [319/740] Linking static target lib/librte_gso.a 00:24:54.522 [320/740] Generating lib/rte_table_def with a custom command 00:24:54.522 [321/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:24:54.522 [322/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:24:54.522 [323/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:24:54.522 [324/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.522 [325/740] Generating lib/rte_table_mingw with a custom command 00:24:54.522 [326/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:24:54.522 [327/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:24:54.522 [328/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:24:54.522 [329/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:24:54.783 [330/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:24:54.784 [331/740] Linking static target lib/librte_ip_frag.a 00:24:54.784 [332/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:24:54.784 [333/740] Linking static target lib/librte_eal.a 00:24:54.784 [334/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:24:54.784 [335/740] Generating lib/rte_pipeline_def with a custom command 00:24:54.784 [336/740] Generating lib/rte_pipeline_mingw with a custom command 00:24:54.784 
[337/740] Linking static target lib/librte_regexdev.a 00:24:54.784 [338/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:24:54.784 [339/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.784 [340/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:24:54.784 [341/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:24:54.784 [342/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:24:54.784 [343/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:24:54.784 [344/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:24:54.784 [345/740] Linking static target lib/librte_mbuf.a 00:24:54.784 [346/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.784 [347/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:24:54.784 [348/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:24:54.784 [349/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:24:54.784 [350/740] Generating lib/rte_graph_def with a custom command 00:24:54.784 [351/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.784 [352/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:24:54.784 [353/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:24:54.784 [354/740] Linking static target lib/librte_power.a 00:24:54.784 [355/740] Generating lib/rte_graph_mingw with a custom command 00:24:54.784 [356/740] Linking static target lib/librte_reorder.a 00:24:54.784 [357/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:24:54.784 [358/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:24:54.784 [359/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:24:54.784 [360/740] Linking static target lib/librte_pcapng.a 00:24:54.784 [361/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:24:54.784 [362/740] Generating lib/rte_node_def with a custom command 00:24:54.784 [363/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:24:55.046 [364/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:24:55.046 [365/740] Linking static target lib/librte_security.a 00:24:55.046 [366/740] Generating lib/rte_node_mingw with a custom command 00:24:55.046 [367/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:24:55.046 [368/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:24:55.046 [369/740] Linking static target lib/librte_bpf.a 00:24:55.046 [370/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.046 [371/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:24:55.046 [372/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.046 [373/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:24:55.046 [374/740] Generating drivers/rte_bus_pci_def with a custom command 00:24:55.046 [375/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:24:55.046 [376/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:24:55.046 [377/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:24:55.046 [378/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:24:55.046 [379/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.046 [380/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.046 [381/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:24:55.046 [382/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:24:55.046 [383/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:24:55.046 [384/740] Generating drivers/rte_bus_vdev_def with a custom command 00:24:55.046 [385/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:24:55.046 [386/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:24:55.046 [387/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:24:55.046 [388/740] Generating drivers/rte_mempool_ring_def with a custom command 00:24:55.046 [389/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:24:55.046 [390/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:24:55.310 [391/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:24:55.310 [392/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:24:55.310 [393/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:24:55.310 [394/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:24:55.310 [395/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.310 [396/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.310 [397/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:24:55.310 [398/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:24:55.310 [399/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.310 [400/740] Linking static target lib/librte_lpm.a 00:24:55.310 [401/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.310 [402/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:24:55.310 [403/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:24:55.310 [404/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:24:55.310 [405/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:24:55.310 [406/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:24:55.310 [407/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.310 [408/740] Generating drivers/rte_net_i40e_def with a custom command 00:24:55.310 [409/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:24:55.310 [410/740] Linking static target lib/librte_rib.a 00:24:55.310 [411/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:24:55.310 [412/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:24:55.310 [413/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:24:55.310 [414/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:24:55.310 [415/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:24:55.310 [416/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:24:55.310 [417/740] Linking static 
target lib/librte_efd.a 00:24:55.310 [418/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:24:55.310 [419/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:24:55.310 [420/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:24:55.573 [421/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:24:55.573 [422/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:24:55.573 [423/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:24:55.573 [424/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:24:55.573 [425/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.573 [426/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:24:55.573 [427/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:24:55.573 [428/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:24:55.573 [429/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:24:55.573 [430/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:24:55.573 [431/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:24:55.573 [432/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:24:55.573 [433/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:24:55.573 [434/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:24:55.573 [435/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.573 [436/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:24:55.573 [437/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:24:55.573 [438/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:24:55.573 [439/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:24:55.573 [440/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:24:55.573 [441/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.573 [442/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:24:55.573 [443/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:24:55.573 [444/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:24:55.573 [445/740] Linking static target lib/librte_graph.a 00:24:55.573 [446/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:24:55.573 [447/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.573 [448/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:24:55.839 [449/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.839 [450/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:24:55.839 [451/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:24:55.839 [452/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:24:55.839 [453/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:24:55.839 [454/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:24:55.839 [455/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.839 [456/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:24:55.839 
[457/740] Linking static target lib/librte_fib.a 00:24:55.839 [458/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:24:55.839 [459/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:24:55.839 [460/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:24:55.839 [461/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:24:55.839 [462/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:24:55.839 [463/740] Linking static target drivers/librte_bus_vdev.a 00:24:55.839 [464/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:24:55.839 [465/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:24:55.839 [466/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:24:56.104 [467/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:24:56.104 [468/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:24:56.104 [469/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:24:56.104 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:24:56.104 [471/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:24:56.104 [472/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:24:56.104 [473/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:24:56.104 [474/740] Linking static target lib/librte_pdump.a 00:24:56.104 [475/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:24:56.104 [476/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:24:56.366 [477/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:24:56.366 [478/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:24:56.366 [479/740] Linking static target drivers/librte_bus_pci.a 00:24:56.366 [480/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:24:56.366 [481/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:24:56.366 [482/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:56.366 [483/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:24:56.366 [484/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:24:56.366 [485/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:24:56.366 [486/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:24:56.366 [487/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:24:56.366 [488/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:24:56.366 [489/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:24:56.366 [490/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:24:56.366 [491/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:24:56.366 [492/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:24:56.366 [493/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:24:56.631 [494/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:24:56.631 [495/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:24:56.631 [496/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:24:56.631 [497/740] Linking static target lib/librte_table.a 00:24:56.631 [498/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:24:56.631 [499/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:24:56.631 [500/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:24:56.631 [501/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:24:56.631 [502/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:24:56.631 [503/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:24:56.631 [504/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:24:56.631 [505/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:24:56.631 [506/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:24:56.631 [507/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:24:56.631 [508/740] Linking static target lib/librte_cryptodev.a 00:24:56.631 [509/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:24:56.892 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:24:56.892 [511/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:24:56.892 [512/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:24:56.892 [513/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:24:56.892 [514/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:24:56.892 [515/740] Linking static target lib/librte_sched.a 00:24:56.892 [516/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:24:56.892 [517/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:24:56.892 [518/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:24:56.892 [519/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:24:56.892 [520/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:24:56.892 [521/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:24:56.892 [522/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:24:56.892 [523/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:24:56.892 [524/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:24:56.892 [525/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:24:56.892 [526/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:24:56.892 [527/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:24:56.892 [528/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:24:56.892 [529/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:24:56.892 [530/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:24:56.892 [531/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:24:56.892 [532/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:24:56.892 [533/740] Linking static target lib/librte_ethdev.a 00:24:56.892 [534/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:24:56.892 [535/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:24:56.892 [536/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:24:56.892 [537/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:24:56.892 [538/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:24:56.892 [539/740] Linking static target lib/librte_node.a 00:24:57.150 [540/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:24:57.150 [541/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:24:57.150 [542/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:24:57.150 [543/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:24:57.150 [544/740] Linking static target lib/librte_ipsec.a 00:24:57.150 [545/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:24:57.150 [546/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:24:57.150 [547/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:24:57.150 [548/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:24:57.150 [549/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:24:57.150 [550/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:24:57.151 [551/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:24:57.151 [552/740] Linking static target lib/librte_member.a 00:24:57.151 [553/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:24:57.409 [554/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:24:57.409 [555/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:24:57.409 [556/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:24:57.409 [557/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:57.409 [558/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:24:57.409 [559/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:24:57.409 [560/740] Linking static target drivers/librte_mempool_ring.a 00:24:57.409 [561/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:24:57.409 [562/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:24:57.409 [563/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:24:57.409 [564/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:24:57.409 [565/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:24:57.409 [566/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:24:57.409 [567/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:24:57.409 [568/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:24:57.409 [569/740] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:24:57.409 [570/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:24:57.409 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:24:57.409 [572/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:24:57.409 [573/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:24:57.409 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:24:57.409 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:24:57.409 [576/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:24:57.409 [577/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:24:57.409 [578/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:24:57.409 [579/740] Linking static target lib/librte_hash.a 00:24:57.409 [580/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:24:57.409 [581/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:24:57.409 [582/740] Linking static target lib/librte_port.a 00:24:57.409 [583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:24:57.668 [584/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:24:57.668 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:24:57.668 [586/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:24:57.668 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:24:57.668 [588/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:24:57.668 [589/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:24:57.668 [590/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:24:57.668 [591/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:24:57.668 [592/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:24:57.668 [593/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:24:57.668 [594/740] Linking static target lib/librte_eventdev.a 00:24:57.668 [595/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:24:57.668 [596/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:24:57.668 [597/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:24:57.668 [598/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:24:57.925 [599/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:24:57.925 [600/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:24:57.925 [601/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:24:57.925 [602/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:24:57.925 [603/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:24:57.925 [604/740] Linking static target lib/librte_acl.a 00:24:57.925 [605/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:24:57.925 [606/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:24:58.182 [607/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:24:58.182 [608/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:24:58.182 [609/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:24:58.182 [610/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:24:58.182 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:24:58.182 [612/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:24:58.182 [613/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:24:58.440 [614/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:24:58.440 [615/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:24:58.698 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:24:58.955 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:24:58.955 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:24:59.521 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:24:59.521 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:24:59.521 [621/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:24:59.778 [622/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:25:00.035 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:25:00.035 [624/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:25:00.293 [625/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:00.293 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:25:00.550 [627/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:25:00.550 [628/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:25:00.550 [629/740] Linking static target drivers/librte_net_i40e.a 00:25:01.116 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:25:01.116 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:25:01.373 [632/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:25:01.373 [633/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:25:03.901 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:25:05.276 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:25:05.534 [636/740] Linking target lib/librte_eal.so.23.0 00:25:05.534 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:25:05.534 [638/740] Linking target lib/librte_pci.so.23.0 00:25:05.534 [639/740] Linking target lib/librte_timer.so.23.0 00:25:05.534 [640/740] Linking target lib/librte_jobstats.so.23.0 00:25:05.534 [641/740] Linking target lib/librte_ring.so.23.0 00:25:05.534 [642/740] Linking target lib/librte_meter.so.23.0 00:25:05.534 [643/740] Linking target lib/librte_cfgfile.so.23.0 00:25:05.534 [644/740] Linking target lib/librte_stack.so.23.0 00:25:05.534 [645/740] Linking target drivers/librte_bus_vdev.so.23.0 00:25:05.534 [646/740] Linking target lib/librte_rawdev.so.23.0 00:25:05.534 [647/740] Linking target lib/librte_dmadev.so.23.0 00:25:05.534 [648/740] Linking target lib/librte_acl.so.23.0 00:25:05.534 [649/740] Linking target 
lib/librte_graph.so.23.0 00:25:05.792 [650/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:25:05.792 [651/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:25:05.792 [652/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:25:05.792 [653/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:25:05.792 [654/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:25:05.792 [655/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:25:05.792 [656/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:25:05.792 [657/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:25:05.792 [658/740] Linking target drivers/librte_bus_pci.so.23.0 00:25:05.792 [659/740] Linking target lib/librte_mempool.so.23.0 00:25:05.792 [660/740] Linking target lib/librte_rcu.so.23.0 00:25:05.792 [661/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:25:05.792 [662/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:25:05.792 [663/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:25:06.050 [664/740] Linking target lib/librte_rib.so.23.0 00:25:06.050 [665/740] Linking target drivers/librte_mempool_ring.so.23.0 00:25:06.050 [666/740] Linking target lib/librte_mbuf.so.23.0 00:25:06.050 [667/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:25:06.050 [668/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:25:06.050 [669/740] Linking target lib/librte_fib.so.23.0 00:25:06.050 [670/740] Linking target lib/librte_cryptodev.so.23.0 00:25:06.050 [671/740] Linking target lib/librte_bbdev.so.23.0 00:25:06.050 [672/740] Linking target lib/librte_gpudev.so.23.0 00:25:06.050 [673/740] Linking target lib/librte_net.so.23.0 00:25:06.050 [674/740] Linking target lib/librte_reorder.so.23.0 00:25:06.050 [675/740] Linking target lib/librte_compressdev.so.23.0 00:25:06.050 [676/740] Linking target lib/librte_distributor.so.23.0 00:25:06.050 [677/740] Linking target lib/librte_regexdev.so.23.0 00:25:06.050 [678/740] Linking target lib/librte_sched.so.23.0 00:25:06.308 [679/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:25:06.308 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:25:06.308 [681/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:25:06.308 [682/740] Linking target lib/librte_security.so.23.0 00:25:06.308 [683/740] Linking target lib/librte_hash.so.23.0 00:25:06.308 [684/740] Linking target lib/librte_cmdline.so.23.0 00:25:06.308 [685/740] Linking target lib/librte_ethdev.so.23.0 00:25:06.308 [686/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:25:06.567 [687/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:25:06.567 [688/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:25:06.567 [689/740] Linking target lib/librte_efd.so.23.0 00:25:06.567 [690/740] Linking target lib/librte_lpm.so.23.0 00:25:06.567 [691/740] Linking target lib/librte_member.so.23.0 00:25:06.567 [692/740] Linking target lib/librte_ipsec.so.23.0 00:25:06.567 [693/740] Linking 
target lib/librte_gso.so.23.0 00:25:06.567 [694/740] Linking target lib/librte_pcapng.so.23.0 00:25:06.567 [695/740] Linking target lib/librte_gro.so.23.0 00:25:06.567 [696/740] Linking target lib/librte_metrics.so.23.0 00:25:06.567 [697/740] Linking target lib/librte_ip_frag.so.23.0 00:25:06.567 [698/740] Linking target lib/librte_bpf.so.23.0 00:25:06.567 [699/740] Linking target lib/librte_power.so.23.0 00:25:06.567 [700/740] Linking target lib/librte_eventdev.so.23.0 00:25:06.567 [701/740] Linking target drivers/librte_net_i40e.so.23.0 00:25:06.567 [702/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:25:06.567 [703/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:25:06.567 [704/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:25:06.567 [705/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:25:06.567 [706/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:25:06.567 [707/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:25:06.567 [708/740] Linking target lib/librte_node.so.23.0 00:25:06.567 [709/740] Linking target lib/librte_latencystats.so.23.0 00:25:06.567 [710/740] Linking target lib/librte_bitratestats.so.23.0 00:25:06.824 [711/740] Linking target lib/librte_port.so.23.0 00:25:06.824 [712/740] Linking target lib/librte_pdump.so.23.0 00:25:06.824 [713/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:25:06.824 [714/740] Linking target lib/librte_table.so.23.0 00:25:07.083 [715/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:25:07.649 [716/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:25:07.649 [717/740] Linking static target lib/librte_vhost.a 00:25:07.907 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:25:07.907 [719/740] Linking static target lib/librte_pipeline.a 00:25:08.165 [720/740] Linking target app/dpdk-test-cmdline 00:25:08.165 [721/740] Linking target app/dpdk-pdump 00:25:08.165 [722/740] Linking target app/dpdk-test-pipeline 00:25:08.165 [723/740] Linking target app/dpdk-test-sad 00:25:08.165 [724/740] Linking target app/dpdk-test-fib 00:25:08.165 [725/740] Linking target app/dpdk-test-compress-perf 00:25:08.165 [726/740] Linking target app/dpdk-test-acl 00:25:08.165 [727/740] Linking target app/dpdk-test-flow-perf 00:25:08.165 [728/740] Linking target app/dpdk-proc-info 00:25:08.165 [729/740] Linking target app/dpdk-test-gpudev 00:25:08.165 [730/740] Linking target app/dpdk-dumpcap 00:25:08.165 [731/740] Linking target app/dpdk-test-security-perf 00:25:08.165 [732/740] Linking target app/dpdk-test-crypto-perf 00:25:08.165 [733/740] Linking target app/dpdk-test-regex 00:25:08.456 [734/740] Linking target app/dpdk-test-bbdev 00:25:08.456 [735/740] Linking target app/dpdk-test-eventdev 00:25:08.456 [736/740] Linking target app/dpdk-testpmd 00:25:09.407 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:25:09.407 [738/740] Linking target lib/librte_vhost.so.23.0 00:25:11.935 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:25:12.193 [740/740] Linking target lib/librte_pipeline.so.23.0 00:25:12.193 03:21:53 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:25:12.193 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:25:12.193 [0/1] Installing files. 00:25:12.455 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.455 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:25:12.456 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:25:12.456 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.456 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:25:12.457 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 
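Note on the Installing records in this section: they are emitted by the "ninja ... install" command shown above, with Meson copying the DPDK example sources into the configured prefix (here dpdk/build, as the destination paths show). A minimal sketch of reproducing the same sequence by hand, taking the build directory, prefix and -j96 from this log; the -Dexamples=all configure option is an assumption, since the meson setup step ran earlier in the job and is not part of this section:
$ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
$ meson setup build-tmp --prefix=$PWD/build -Dexamples=all   # assumed options; only the ninja steps appear in this log
$ ninja -C build-tmp -j96            # produces the [N/740] compile/link records above
$ ninja -C build-tmp install         # produces the "Installing ... to ..." records below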
00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:25:12.457 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:25:12.457 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:25:12.458 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.458 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:25:12.459 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:25:12.460 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:25:12.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:25:12.461 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_ring.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:25:12.461 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.461 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:25:12.462 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.462 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing lib/librte_node.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:25:12.723 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:25:12.723 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:25:12.723 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.723 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:25:12.723 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.724 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
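The long run of "Installing ... to .../build/include" entries above stages DPDK's public API headers into the local build prefix. As a minimal sketch of how a consumer picks those headers up (my_app.c is an assumed example source file, not anything produced by this job; only the include path comes from the log):

# Sketch only: my_app.c is hypothetical; the -I path is the staging prefix
# that the installer lines above write the headers into.
DPDK_PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
cc -c my_app.c -I"$DPDK_PREFIX/include" -o my_app.o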
00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.725 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.726 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
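Alongside the headers, the installer copies DPDK's usertools helpers into build/bin (dpdk-devbind.py above, with further scripts following below). A hedged usage sketch, with a placeholder PCI address standing in for a real NIC:

# dpdk-devbind.py is the script installed above; 0000:3b:00.0 is a
# hypothetical device address, not one taken from this job.
BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
"$BIN"/dpdk-devbind.py --status
"$BIN"/dpdk-devbind.py --bind=vfio-pci 0000:3b:00.0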
00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:25:12.727 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:25:12.727 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:25:12.727 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:25:12.727 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:25:12.727 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:25:12.727 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:25:12.727 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:25:12.727 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:25:12.727 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:25:12.727 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:25:12.727 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:25:12.727 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:25:12.727 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:25:12.727 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:25:12.727 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:25:12.727 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:25:12.727 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:25:12.727 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:25:12.727 Installing symlink pointing to librte_meter.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:25:12.727 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:25:12.727 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:25:12.727 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:25:12.727 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:25:12.727 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:25:12.727 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:25:12.727 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:25:12.727 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:25:12.727 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:25:12.727 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:25:12.727 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:25:12.727 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:25:12.727 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:25:12.727 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:25:12.727 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:25:12.727 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:25:12.727 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:25:12.727 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:25:12.727 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:25:12.727 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:25:12.727 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:25:12.727 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:25:12.727 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:25:12.727 Installing symlink pointing to librte_compressdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:25:12.727 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:25:12.727 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:25:12.727 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:25:12.727 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:25:12.727 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:25:12.728 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:25:12.728 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:25:12.728 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:25:12.728 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:25:12.728 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:25:12.728 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:25:12.728 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:25:12.728 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:25:12.728 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:25:12.728 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:25:12.728 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:25:12.728 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:25:12.728 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:25:12.728 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:25:12.728 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:25:12.728 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:25:12.728 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:25:12.728 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:25:12.728 Installing symlink pointing to librte_member.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:25:12.728 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:25:12.728 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:25:12.728 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:25:12.728 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:25:12.728 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:25:12.728 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:25:12.728 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:25:12.728 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:25:12.728 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:25:12.728 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:25:12.728 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:25:12.728 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:25:12.728 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:25:12.728 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:25:12.728 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:25:12.728 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:25:12.728 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:25:12.728 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:25:12.728 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:25:12.728 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:25:12.728 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:25:12.728 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:25:12.728 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:25:12.728 Installing symlink pointing to librte_ipsec.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:25:12.728 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:25:12.728 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:25:12.728 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:25:12.728 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:25:12.728 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:25:12.728 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:25:12.728 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:25:12.728 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:25:12.728 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:25:12.728 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:25:12.728 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:25:12.728 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:25:12.728 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:25:12.728 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:25:12.728 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:25:12.728 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:25:12.728 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:25:12.728 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:25:12.728 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:25:12.728 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:25:12.728 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:25:12.728 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:25:12.728 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:25:12.728 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:25:12.728 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:25:12.728 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:25:12.728 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:25:12.728 './librte_net_i40e.so' -> 
'dpdk/pmds-23.0/librte_net_i40e.so' 00:25:12.728 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:25:12.728 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:25:12.728 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:25:12.728 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:25:12.728 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:25:12.728 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:25:12.728 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:25:12.728 03:21:54 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:25:12.728 03:21:54 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:25:12.728 03:21:54 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:25:12.728 03:21:54 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:12.728 00:25:12.728 real 0m25.441s 00:25:12.728 user 7m21.455s 00:25:12.728 sys 1m45.094s 00:25:12.728 03:21:54 build_native_dpdk -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:25:12.728 03:21:54 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:25:12.728 ************************************ 00:25:12.728 END TEST build_native_dpdk 00:25:12.728 ************************************ 00:25:12.728 03:21:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:25:12.728 03:21:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:25:12.728 03:21:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:25:12.728 03:21:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:25:12.728 03:21:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:25:12.728 03:21:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:25:12.728 03:21:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:25:12.728 03:21:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:25:12.986 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:25:12.986 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:25:12.986 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:25:13.244 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:25:13.501 Using 'verbs' RDMA provider 00:25:26.277 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:25:38.491 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:25:38.491 Creating mk/config.mk...done. 00:25:38.491 Creating mk/cc.flags.mk...done. 00:25:38.491 Type 'make' to build. 
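The configure step just above reports finding DPDK through the libdpdk.pc files installed earlier in this log. A minimal sketch of that discovery mechanism using plain pkg-config (nothing beyond standard pkg-config behavior is assumed):

# Point pkg-config at the staged .pc files, then query the compile and link
# flags, the same mechanism the configure output above reports using.
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --cflags libdpdk
pkg-config --libs libdpdk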
00:25:38.491 03:22:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:25:38.491 03:22:18 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:25:38.491 03:22:18 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:25:38.491 03:22:18 -- common/autotest_common.sh@10 -- $ set +x 00:25:38.491 ************************************ 00:25:38.491 START TEST make 00:25:38.491 ************************************ 00:25:38.491 03:22:18 make -- common/autotest_common.sh@1124 -- $ make -j96 00:25:38.491 make[1]: Nothing to be done for 'all'. 00:25:39.067 The Meson build system 00:25:39.067 Version: 1.3.1 00:25:39.067 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:25:39.067 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:25:39.067 Build type: native build 00:25:39.067 Project name: libvfio-user 00:25:39.067 Project version: 0.0.1 00:25:39.067 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:25:39.067 C linker for the host machine: gcc ld.bfd 2.39-16 00:25:39.067 Host machine cpu family: x86_64 00:25:39.067 Host machine cpu: x86_64 00:25:39.067 Run-time dependency threads found: YES 00:25:39.067 Library dl found: YES 00:25:39.067 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:25:39.067 Run-time dependency json-c found: YES 0.17 00:25:39.067 Run-time dependency cmocka found: YES 1.1.7 00:25:39.067 Program pytest-3 found: NO 00:25:39.067 Program flake8 found: NO 00:25:39.067 Program misspell-fixer found: NO 00:25:39.067 Program restructuredtext-lint found: NO 00:25:39.067 Program valgrind found: YES (/usr/bin/valgrind) 00:25:39.067 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:25:39.067 Compiler for C supports arguments -Wmissing-declarations: YES 00:25:39.067 Compiler for C supports arguments -Wwrite-strings: YES 00:25:39.067 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:25:39.067 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:25:39.067 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:25:39.067 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
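The Meson output above is SPDK's bundled libvfio-user submodule being configured as part of the make step. A reconstruction of the equivalent standalone invocation, inferred from the source and build directories in the log and from the "User defined options" block that follows (the exact option spelling is an assumption):

# Inferred sketch: directories come from the log; buildtype and
# default_library match the options Meson reports below.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
meson setup ../build/libvfio-user/build-debug --buildtype=debug --default-library=shared
ninja -C ../build/libvfio-user/build-debug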
00:25:39.067 Build targets in project: 8 00:25:39.067 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:25:39.067 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:25:39.067 00:25:39.067 libvfio-user 0.0.1 00:25:39.067 00:25:39.067 User defined options 00:25:39.067 buildtype : debug 00:25:39.067 default_library: shared 00:25:39.067 libdir : /usr/local/lib 00:25:39.067 00:25:39.067 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:25:39.632 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:25:39.632 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:25:39.632 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:25:39.632 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:25:39.633 [4/37] Compiling C object samples/null.p/null.c.o 00:25:39.633 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:25:39.633 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:25:39.633 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:25:39.633 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:25:39.633 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:25:39.633 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:25:39.633 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:25:39.633 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:25:39.633 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:25:39.633 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:25:39.633 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:25:39.633 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:25:39.633 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:25:39.633 [18/37] Compiling C object test/unit_tests.p/mocks.c.o 00:25:39.633 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:25:39.633 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:25:39.633 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:25:39.890 [22/37] Compiling C object samples/server.p/server.c.o 00:25:39.890 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:25:39.890 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:25:39.890 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:25:39.890 [26/37] Compiling C object samples/client.p/client.c.o 00:25:39.890 [27/37] Linking target samples/client 00:25:39.890 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:25:39.890 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:25:39.890 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:25:39.890 [31/37] Linking target test/unit_tests 00:25:39.890 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:25:40.148 [33/37] Linking target samples/server 00:25:40.148 [34/37] Linking target samples/lspci 00:25:40.148 [35/37] Linking target samples/shadow_ioeventfd_server 00:25:40.148 [36/37] Linking target samples/null 00:25:40.148 [37/37] Linking target samples/gpio-pci-idio-16 00:25:40.148 INFO: autodetecting backend as ninja 00:25:40.148 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:25:40.148 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:25:40.406 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:25:40.406 ninja: no work to do. 00:25:48.511 CC lib/ut/ut.o 00:25:48.511 CC lib/log/log.o 00:25:48.511 CC lib/log/log_flags.o 00:25:48.511 CC lib/log/log_deprecated.o 00:25:48.511 CC lib/ut_mock/mock.o 00:25:48.511 LIB libspdk_ut.a 00:25:48.511 LIB libspdk_log.a 00:25:48.511 SO libspdk_ut.so.2.0 00:25:48.511 LIB libspdk_ut_mock.a 00:25:48.511 SO libspdk_log.so.7.0 00:25:48.511 SO libspdk_ut_mock.so.6.0 00:25:48.511 SYMLINK libspdk_ut.so 00:25:48.511 SYMLINK libspdk_ut_mock.so 00:25:48.511 SYMLINK libspdk_log.so 00:25:48.769 CC lib/ioat/ioat.o 00:25:48.769 CC lib/dma/dma.o 00:25:48.769 CXX lib/trace_parser/trace.o 00:25:48.769 CC lib/util/bit_array.o 00:25:48.769 CC lib/util/base64.o 00:25:48.769 CC lib/util/cpuset.o 00:25:48.769 CC lib/util/crc32.o 00:25:48.769 CC lib/util/crc16.o 00:25:48.769 CC lib/util/crc32c.o 00:25:48.769 CC lib/util/crc32_ieee.o 00:25:48.769 CC lib/util/crc64.o 00:25:48.769 CC lib/util/dif.o 00:25:48.769 CC lib/util/fd.o 00:25:48.769 CC lib/util/iov.o 00:25:48.769 CC lib/util/file.o 00:25:48.769 CC lib/util/hexlify.o 00:25:48.769 CC lib/util/math.o 00:25:48.769 CC lib/util/pipe.o 00:25:48.769 CC lib/util/strerror_tls.o 00:25:48.769 CC lib/util/uuid.o 00:25:48.769 CC lib/util/fd_group.o 00:25:48.769 CC lib/util/string.o 00:25:48.769 CC lib/util/xor.o 00:25:48.769 CC lib/util/zipf.o 00:25:49.027 CC lib/vfio_user/host/vfio_user_pci.o 00:25:49.027 CC lib/vfio_user/host/vfio_user.o 00:25:49.027 LIB libspdk_dma.a 00:25:49.027 SO libspdk_dma.so.4.0 00:25:49.027 LIB libspdk_ioat.a 00:25:49.027 SO libspdk_ioat.so.7.0 00:25:49.027 SYMLINK libspdk_dma.so 00:25:49.027 SYMLINK libspdk_ioat.so 00:25:49.027 LIB libspdk_vfio_user.a 00:25:49.296 SO libspdk_vfio_user.so.5.0 00:25:49.296 LIB libspdk_util.a 00:25:49.296 SYMLINK libspdk_vfio_user.so 00:25:49.296 SO libspdk_util.so.9.0 00:25:49.296 SYMLINK libspdk_util.so 00:25:49.597 LIB libspdk_trace_parser.a 00:25:49.597 SO libspdk_trace_parser.so.5.0 00:25:49.597 SYMLINK libspdk_trace_parser.so 00:25:49.597 CC lib/rdma/common.o 00:25:49.597 CC lib/rdma/rdma_verbs.o 00:25:49.597 CC lib/json/json_parse.o 00:25:49.597 CC lib/json/json_util.o 00:25:49.597 CC lib/json/json_write.o 00:25:49.597 CC lib/idxd/idxd.o 00:25:49.597 CC lib/idxd/idxd_user.o 00:25:49.597 CC lib/idxd/idxd_kernel.o 00:25:49.597 CC lib/conf/conf.o 00:25:49.597 CC lib/env_dpdk/env.o 00:25:49.597 CC lib/env_dpdk/memory.o 00:25:49.597 CC lib/vmd/vmd.o 00:25:49.597 CC lib/vmd/led.o 00:25:49.597 CC lib/env_dpdk/init.o 00:25:49.597 CC lib/env_dpdk/pci.o 00:25:49.597 CC lib/env_dpdk/threads.o 00:25:49.597 CC lib/env_dpdk/pci_ioat.o 00:25:49.597 CC lib/env_dpdk/pci_virtio.o 00:25:49.597 CC lib/env_dpdk/pci_vmd.o 00:25:49.597 CC lib/env_dpdk/sigbus_handler.o 00:25:49.597 CC lib/env_dpdk/pci_idxd.o 00:25:49.597 CC lib/env_dpdk/pci_event.o 00:25:49.855 CC lib/env_dpdk/pci_dpdk.o 00:25:49.855 CC lib/env_dpdk/pci_dpdk_2207.o 00:25:49.855 CC lib/env_dpdk/pci_dpdk_2211.o 00:25:49.855 LIB libspdk_conf.a 00:25:49.855 LIB libspdk_rdma.a 00:25:49.855 SO libspdk_conf.so.6.0 00:25:49.855 LIB libspdk_json.a 00:25:49.855 SO libspdk_rdma.so.6.0 00:25:50.114 SO libspdk_json.so.6.0 00:25:50.114 SYMLINK libspdk_conf.so 00:25:50.114 SYMLINK libspdk_rdma.so 00:25:50.114 SYMLINK 
libspdk_json.so 00:25:50.114 LIB libspdk_idxd.a 00:25:50.114 SO libspdk_idxd.so.12.0 00:25:50.114 LIB libspdk_vmd.a 00:25:50.371 SYMLINK libspdk_idxd.so 00:25:50.371 SO libspdk_vmd.so.6.0 00:25:50.371 SYMLINK libspdk_vmd.so 00:25:50.371 CC lib/jsonrpc/jsonrpc_server.o 00:25:50.371 CC lib/jsonrpc/jsonrpc_client.o 00:25:50.371 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:25:50.371 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:25:50.629 LIB libspdk_jsonrpc.a 00:25:50.629 SO libspdk_jsonrpc.so.6.0 00:25:50.629 SYMLINK libspdk_jsonrpc.so 00:25:50.629 LIB libspdk_env_dpdk.a 00:25:50.886 SO libspdk_env_dpdk.so.14.1 00:25:50.886 SYMLINK libspdk_env_dpdk.so 00:25:50.886 CC lib/rpc/rpc.o 00:25:51.144 LIB libspdk_rpc.a 00:25:51.144 SO libspdk_rpc.so.6.0 00:25:51.144 SYMLINK libspdk_rpc.so 00:25:51.402 CC lib/trace/trace.o 00:25:51.402 CC lib/trace/trace_flags.o 00:25:51.402 CC lib/trace/trace_rpc.o 00:25:51.661 CC lib/keyring/keyring_rpc.o 00:25:51.661 CC lib/keyring/keyring.o 00:25:51.661 CC lib/notify/notify.o 00:25:51.661 CC lib/notify/notify_rpc.o 00:25:51.661 LIB libspdk_notify.a 00:25:51.661 LIB libspdk_trace.a 00:25:51.661 SO libspdk_notify.so.6.0 00:25:51.661 SO libspdk_trace.so.10.0 00:25:51.661 LIB libspdk_keyring.a 00:25:51.661 SO libspdk_keyring.so.1.0 00:25:51.661 SYMLINK libspdk_notify.so 00:25:51.920 SYMLINK libspdk_trace.so 00:25:51.920 SYMLINK libspdk_keyring.so 00:25:52.178 CC lib/thread/thread.o 00:25:52.178 CC lib/thread/iobuf.o 00:25:52.178 CC lib/sock/sock.o 00:25:52.178 CC lib/sock/sock_rpc.o 00:25:52.437 LIB libspdk_sock.a 00:25:52.437 SO libspdk_sock.so.9.0 00:25:52.437 SYMLINK libspdk_sock.so 00:25:52.695 CC lib/nvme/nvme_ctrlr_cmd.o 00:25:52.695 CC lib/nvme/nvme_ctrlr.o 00:25:52.696 CC lib/nvme/nvme_ns_cmd.o 00:25:52.696 CC lib/nvme/nvme_fabric.o 00:25:52.696 CC lib/nvme/nvme_ns.o 00:25:52.696 CC lib/nvme/nvme_pcie_common.o 00:25:52.696 CC lib/nvme/nvme_pcie.o 00:25:52.696 CC lib/nvme/nvme_qpair.o 00:25:52.696 CC lib/nvme/nvme.o 00:25:52.696 CC lib/nvme/nvme_discovery.o 00:25:52.696 CC lib/nvme/nvme_quirks.o 00:25:52.696 CC lib/nvme/nvme_transport.o 00:25:52.696 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:25:52.696 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:25:52.696 CC lib/nvme/nvme_tcp.o 00:25:52.696 CC lib/nvme/nvme_opal.o 00:25:52.696 CC lib/nvme/nvme_io_msg.o 00:25:52.696 CC lib/nvme/nvme_poll_group.o 00:25:52.696 CC lib/nvme/nvme_zns.o 00:25:52.696 CC lib/nvme/nvme_stubs.o 00:25:52.696 CC lib/nvme/nvme_auth.o 00:25:52.696 CC lib/nvme/nvme_cuse.o 00:25:52.696 CC lib/nvme/nvme_vfio_user.o 00:25:52.696 CC lib/nvme/nvme_rdma.o 00:25:53.262 LIB libspdk_thread.a 00:25:53.262 SO libspdk_thread.so.10.0 00:25:53.262 SYMLINK libspdk_thread.so 00:25:53.520 CC lib/accel/accel.o 00:25:53.520 CC lib/vfu_tgt/tgt_endpoint.o 00:25:53.520 CC lib/vfu_tgt/tgt_rpc.o 00:25:53.520 CC lib/accel/accel_rpc.o 00:25:53.520 CC lib/blob/blobstore.o 00:25:53.520 CC lib/accel/accel_sw.o 00:25:53.520 CC lib/blob/request.o 00:25:53.520 CC lib/blob/zeroes.o 00:25:53.520 CC lib/init/json_config.o 00:25:53.520 CC lib/blob/blob_bs_dev.o 00:25:53.520 CC lib/init/rpc.o 00:25:53.520 CC lib/init/subsystem.o 00:25:53.520 CC lib/init/subsystem_rpc.o 00:25:53.520 CC lib/virtio/virtio.o 00:25:53.520 CC lib/virtio/virtio_vhost_user.o 00:25:53.520 CC lib/virtio/virtio_vfio_user.o 00:25:53.520 CC lib/virtio/virtio_pci.o 00:25:53.779 LIB libspdk_init.a 00:25:53.779 SO libspdk_init.so.5.0 00:25:53.779 LIB libspdk_vfu_tgt.a 00:25:53.779 LIB libspdk_virtio.a 00:25:53.779 SO libspdk_vfu_tgt.so.3.0 00:25:53.779 SO libspdk_virtio.so.7.0 00:25:53.779 
SYMLINK libspdk_init.so 00:25:53.779 SYMLINK libspdk_vfu_tgt.so 00:25:53.779 SYMLINK libspdk_virtio.so 00:25:54.037 CC lib/event/app.o 00:25:54.037 CC lib/event/reactor.o 00:25:54.037 CC lib/event/log_rpc.o 00:25:54.037 CC lib/event/app_rpc.o 00:25:54.037 CC lib/event/scheduler_static.o 00:25:54.295 LIB libspdk_accel.a 00:25:54.295 SO libspdk_accel.so.15.0 00:25:54.295 LIB libspdk_nvme.a 00:25:54.295 SYMLINK libspdk_accel.so 00:25:54.555 LIB libspdk_event.a 00:25:54.555 SO libspdk_nvme.so.13.0 00:25:54.555 SO libspdk_event.so.13.1 00:25:54.555 SYMLINK libspdk_event.so 00:25:54.555 CC lib/bdev/bdev.o 00:25:54.555 CC lib/bdev/bdev_zone.o 00:25:54.555 CC lib/bdev/bdev_rpc.o 00:25:54.555 CC lib/bdev/scsi_nvme.o 00:25:54.555 CC lib/bdev/part.o 00:25:54.813 SYMLINK libspdk_nvme.so 00:25:55.754 LIB libspdk_blob.a 00:25:55.754 SO libspdk_blob.so.11.0 00:25:55.754 SYMLINK libspdk_blob.so 00:25:56.011 CC lib/lvol/lvol.o 00:25:56.011 CC lib/blobfs/blobfs.o 00:25:56.011 CC lib/blobfs/tree.o 00:25:56.575 LIB libspdk_bdev.a 00:25:56.575 SO libspdk_bdev.so.15.0 00:25:56.575 LIB libspdk_blobfs.a 00:25:56.575 SO libspdk_blobfs.so.10.0 00:25:56.575 SYMLINK libspdk_bdev.so 00:25:56.575 LIB libspdk_lvol.a 00:25:56.575 SYMLINK libspdk_blobfs.so 00:25:56.575 SO libspdk_lvol.so.10.0 00:25:56.575 SYMLINK libspdk_lvol.so 00:25:56.834 CC lib/ublk/ublk.o 00:25:56.834 CC lib/ublk/ublk_rpc.o 00:25:56.834 CC lib/scsi/lun.o 00:25:56.834 CC lib/scsi/dev.o 00:25:56.834 CC lib/scsi/port.o 00:25:56.834 CC lib/scsi/scsi.o 00:25:56.834 CC lib/scsi/scsi_bdev.o 00:25:56.834 CC lib/scsi/scsi_pr.o 00:25:56.834 CC lib/scsi/scsi_rpc.o 00:25:56.834 CC lib/scsi/task.o 00:25:56.834 CC lib/nvmf/ctrlr.o 00:25:56.834 CC lib/nvmf/ctrlr_discovery.o 00:25:56.834 CC lib/ftl/ftl_core.o 00:25:56.834 CC lib/nvmf/ctrlr_bdev.o 00:25:56.834 CC lib/ftl/ftl_layout.o 00:25:56.834 CC lib/ftl/ftl_init.o 00:25:56.834 CC lib/nvmf/subsystem.o 00:25:56.834 CC lib/nvmf/nvmf.o 00:25:56.834 CC lib/ftl/ftl_debug.o 00:25:56.834 CC lib/nvmf/nvmf_rpc.o 00:25:56.834 CC lib/ftl/ftl_io.o 00:25:56.834 CC lib/nvmf/transport.o 00:25:56.834 CC lib/ftl/ftl_sb.o 00:25:56.834 CC lib/nvmf/tcp.o 00:25:56.834 CC lib/ftl/ftl_l2p.o 00:25:56.834 CC lib/ftl/ftl_l2p_flat.o 00:25:56.834 CC lib/nvmf/stubs.o 00:25:56.834 CC lib/ftl/ftl_nv_cache.o 00:25:56.834 CC lib/nvmf/mdns_server.o 00:25:56.834 CC lib/ftl/ftl_band_ops.o 00:25:56.834 CC lib/nvmf/vfio_user.o 00:25:56.834 CC lib/ftl/ftl_band.o 00:25:56.834 CC lib/nvmf/rdma.o 00:25:56.834 CC lib/ftl/ftl_rq.o 00:25:56.834 CC lib/ftl/ftl_writer.o 00:25:56.834 CC lib/nvmf/auth.o 00:25:56.834 CC lib/ftl/ftl_reloc.o 00:25:56.834 CC lib/ftl/ftl_l2p_cache.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:25:56.834 CC lib/ftl/ftl_p2l.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_startup.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_md.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_misc.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:25:56.834 CC lib/nbd/nbd.o 00:25:56.834 CC lib/nbd/nbd_rpc.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_band.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:25:56.834 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:25:56.834 CC lib/ftl/utils/ftl_md.o 00:25:56.834 CC lib/ftl/utils/ftl_conf.o 00:25:56.834 CC lib/ftl/utils/ftl_mempool.o 00:25:56.834 CC lib/ftl/utils/ftl_bitmap.o 00:25:56.834 CC 
lib/ftl/utils/ftl_property.o 00:25:56.834 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:25:56.834 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:25:56.834 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:25:56.834 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:25:56.834 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:25:56.834 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:25:56.834 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:25:56.834 CC lib/ftl/upgrade/ftl_sb_v5.o 00:25:56.834 CC lib/ftl/upgrade/ftl_sb_v3.o 00:25:56.834 CC lib/ftl/nvc/ftl_nvc_dev.o 00:25:56.834 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:25:56.834 CC lib/ftl/base/ftl_base_dev.o 00:25:56.834 CC lib/ftl/base/ftl_base_bdev.o 00:25:56.834 CC lib/ftl/ftl_trace.o 00:25:57.789 LIB libspdk_scsi.a 00:25:57.789 LIB libspdk_nbd.a 00:25:57.789 LIB libspdk_ublk.a 00:25:57.789 SO libspdk_nbd.so.7.0 00:25:57.789 SO libspdk_scsi.so.9.0 00:25:57.789 SO libspdk_ublk.so.3.0 00:25:57.789 SYMLINK libspdk_nbd.so 00:25:57.789 SYMLINK libspdk_ublk.so 00:25:57.789 SYMLINK libspdk_scsi.so 00:25:58.050 LIB libspdk_ftl.a 00:25:58.050 CC lib/iscsi/conn.o 00:25:58.050 CC lib/iscsi/md5.o 00:25:58.050 CC lib/iscsi/init_grp.o 00:25:58.050 CC lib/iscsi/iscsi.o 00:25:58.050 CC lib/iscsi/param.o 00:25:58.050 CC lib/iscsi/portal_grp.o 00:25:58.050 CC lib/iscsi/tgt_node.o 00:25:58.050 CC lib/iscsi/iscsi_subsystem.o 00:25:58.050 CC lib/vhost/vhost.o 00:25:58.050 CC lib/iscsi/iscsi_rpc.o 00:25:58.050 CC lib/vhost/vhost_rpc.o 00:25:58.050 CC lib/iscsi/task.o 00:25:58.050 CC lib/vhost/vhost_scsi.o 00:25:58.050 CC lib/vhost/vhost_blk.o 00:25:58.050 CC lib/vhost/rte_vhost_user.o 00:25:58.050 SO libspdk_ftl.so.9.0 00:25:58.307 SYMLINK libspdk_ftl.so 00:25:58.565 LIB libspdk_nvmf.a 00:25:58.565 SO libspdk_nvmf.so.18.1 00:25:58.824 LIB libspdk_vhost.a 00:25:58.824 SO libspdk_vhost.so.8.0 00:25:58.824 SYMLINK libspdk_nvmf.so 00:25:58.824 SYMLINK libspdk_vhost.so 00:25:58.824 LIB libspdk_iscsi.a 00:25:59.083 SO libspdk_iscsi.so.8.0 00:25:59.083 SYMLINK libspdk_iscsi.so 00:25:59.650 CC module/env_dpdk/env_dpdk_rpc.o 00:25:59.650 CC module/vfu_device/vfu_virtio.o 00:25:59.650 CC module/vfu_device/vfu_virtio_blk.o 00:25:59.650 CC module/vfu_device/vfu_virtio_scsi.o 00:25:59.650 CC module/vfu_device/vfu_virtio_rpc.o 00:25:59.650 CC module/accel/dsa/accel_dsa.o 00:25:59.650 CC module/accel/dsa/accel_dsa_rpc.o 00:25:59.650 CC module/sock/posix/posix.o 00:25:59.650 CC module/accel/iaa/accel_iaa.o 00:25:59.650 CC module/accel/iaa/accel_iaa_rpc.o 00:25:59.650 CC module/accel/error/accel_error.o 00:25:59.650 CC module/accel/error/accel_error_rpc.o 00:25:59.650 LIB libspdk_env_dpdk_rpc.a 00:25:59.650 CC module/keyring/linux/keyring.o 00:25:59.650 CC module/blob/bdev/blob_bdev.o 00:25:59.650 CC module/scheduler/gscheduler/gscheduler.o 00:25:59.650 CC module/keyring/linux/keyring_rpc.o 00:25:59.650 CC module/keyring/file/keyring.o 00:25:59.650 CC module/keyring/file/keyring_rpc.o 00:25:59.650 CC module/accel/ioat/accel_ioat_rpc.o 00:25:59.650 CC module/accel/ioat/accel_ioat.o 00:25:59.650 CC module/scheduler/dynamic/scheduler_dynamic.o 00:25:59.650 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:25:59.650 SO libspdk_env_dpdk_rpc.so.6.0 00:25:59.908 SYMLINK libspdk_env_dpdk_rpc.so 00:25:59.908 LIB libspdk_keyring_linux.a 00:25:59.908 LIB libspdk_scheduler_gscheduler.a 00:25:59.908 LIB libspdk_keyring_file.a 00:25:59.908 LIB libspdk_accel_error.a 00:25:59.908 LIB libspdk_scheduler_dpdk_governor.a 00:25:59.908 SO libspdk_keyring_linux.so.1.0 00:25:59.909 SO libspdk_scheduler_gscheduler.so.4.0 00:25:59.909 SO 
libspdk_keyring_file.so.1.0 00:25:59.909 SO libspdk_accel_error.so.2.0 00:25:59.909 LIB libspdk_accel_iaa.a 00:25:59.909 LIB libspdk_scheduler_dynamic.a 00:25:59.909 LIB libspdk_accel_ioat.a 00:25:59.909 LIB libspdk_accel_dsa.a 00:25:59.909 SO libspdk_scheduler_dpdk_governor.so.4.0 00:25:59.909 SO libspdk_scheduler_dynamic.so.4.0 00:25:59.909 SO libspdk_accel_iaa.so.3.0 00:25:59.909 SO libspdk_accel_ioat.so.6.0 00:25:59.909 SO libspdk_accel_dsa.so.5.0 00:25:59.909 SYMLINK libspdk_scheduler_gscheduler.so 00:25:59.909 SYMLINK libspdk_keyring_linux.so 00:25:59.909 SYMLINK libspdk_keyring_file.so 00:25:59.909 SYMLINK libspdk_accel_error.so 00:25:59.909 SYMLINK libspdk_scheduler_dpdk_governor.so 00:25:59.909 LIB libspdk_blob_bdev.a 00:25:59.909 SYMLINK libspdk_scheduler_dynamic.so 00:25:59.909 SYMLINK libspdk_accel_iaa.so 00:25:59.909 SYMLINK libspdk_accel_ioat.so 00:25:59.909 SO libspdk_blob_bdev.so.11.0 00:25:59.909 SYMLINK libspdk_accel_dsa.so 00:26:00.167 SYMLINK libspdk_blob_bdev.so 00:26:00.167 LIB libspdk_vfu_device.a 00:26:00.167 SO libspdk_vfu_device.so.3.0 00:26:00.167 SYMLINK libspdk_vfu_device.so 00:26:00.167 LIB libspdk_sock_posix.a 00:26:00.425 SO libspdk_sock_posix.so.6.0 00:26:00.425 SYMLINK libspdk_sock_posix.so 00:26:00.425 CC module/blobfs/bdev/blobfs_bdev.o 00:26:00.425 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:26:00.425 CC module/bdev/delay/vbdev_delay.o 00:26:00.425 CC module/bdev/delay/vbdev_delay_rpc.o 00:26:00.425 CC module/bdev/aio/bdev_aio_rpc.o 00:26:00.425 CC module/bdev/aio/bdev_aio.o 00:26:00.425 CC module/bdev/lvol/vbdev_lvol.o 00:26:00.425 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:26:00.425 CC module/bdev/zone_block/vbdev_zone_block.o 00:26:00.425 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:26:00.425 CC module/bdev/virtio/bdev_virtio_scsi.o 00:26:00.425 CC module/bdev/virtio/bdev_virtio_blk.o 00:26:00.425 CC module/bdev/virtio/bdev_virtio_rpc.o 00:26:00.425 CC module/bdev/passthru/vbdev_passthru.o 00:26:00.425 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:26:00.425 CC module/bdev/gpt/gpt.o 00:26:00.425 CC module/bdev/gpt/vbdev_gpt.o 00:26:00.425 CC module/bdev/nvme/bdev_nvme.o 00:26:00.425 CC module/bdev/split/vbdev_split.o 00:26:00.425 CC module/bdev/iscsi/bdev_iscsi.o 00:26:00.425 CC module/bdev/nvme/bdev_nvme_rpc.o 00:26:00.425 CC module/bdev/split/vbdev_split_rpc.o 00:26:00.425 CC module/bdev/nvme/nvme_rpc.o 00:26:00.425 CC module/bdev/malloc/bdev_malloc_rpc.o 00:26:00.425 CC module/bdev/malloc/bdev_malloc.o 00:26:00.425 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:26:00.425 CC module/bdev/nvme/bdev_mdns_client.o 00:26:00.425 CC module/bdev/error/vbdev_error.o 00:26:00.425 CC module/bdev/nvme/vbdev_opal.o 00:26:00.425 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:26:00.425 CC module/bdev/nvme/vbdev_opal_rpc.o 00:26:00.425 CC module/bdev/error/vbdev_error_rpc.o 00:26:00.425 CC module/bdev/null/bdev_null.o 00:26:00.425 CC module/bdev/null/bdev_null_rpc.o 00:26:00.425 CC module/bdev/ftl/bdev_ftl.o 00:26:00.425 CC module/bdev/ftl/bdev_ftl_rpc.o 00:26:00.425 CC module/bdev/raid/bdev_raid.o 00:26:00.425 CC module/bdev/raid/bdev_raid_rpc.o 00:26:00.425 CC module/bdev/raid/raid0.o 00:26:00.425 CC module/bdev/raid/bdev_raid_sb.o 00:26:00.425 CC module/bdev/raid/raid1.o 00:26:00.425 CC module/bdev/raid/concat.o 00:26:00.683 LIB libspdk_blobfs_bdev.a 00:26:00.683 SO libspdk_blobfs_bdev.so.6.0 00:26:00.940 LIB libspdk_bdev_error.a 00:26:00.940 SYMLINK libspdk_blobfs_bdev.so 00:26:00.940 LIB libspdk_bdev_split.a 00:26:00.940 LIB libspdk_bdev_null.a 00:26:00.940 SO 
libspdk_bdev_error.so.6.0 00:26:00.940 LIB libspdk_bdev_passthru.a 00:26:00.940 LIB libspdk_bdev_aio.a 00:26:00.940 SO libspdk_bdev_split.so.6.0 00:26:00.940 LIB libspdk_bdev_zone_block.a 00:26:00.940 LIB libspdk_bdev_gpt.a 00:26:00.940 SO libspdk_bdev_null.so.6.0 00:26:00.940 LIB libspdk_bdev_ftl.a 00:26:00.940 SO libspdk_bdev_aio.so.6.0 00:26:00.940 SO libspdk_bdev_passthru.so.6.0 00:26:00.940 SO libspdk_bdev_zone_block.so.6.0 00:26:00.940 SO libspdk_bdev_gpt.so.6.0 00:26:00.940 SYMLINK libspdk_bdev_error.so 00:26:00.940 SYMLINK libspdk_bdev_split.so 00:26:00.940 LIB libspdk_bdev_iscsi.a 00:26:00.940 SO libspdk_bdev_ftl.so.6.0 00:26:00.940 SYMLINK libspdk_bdev_null.so 00:26:00.940 LIB libspdk_bdev_delay.a 00:26:00.940 LIB libspdk_bdev_malloc.a 00:26:00.940 SYMLINK libspdk_bdev_passthru.so 00:26:00.940 SYMLINK libspdk_bdev_aio.so 00:26:00.940 SO libspdk_bdev_malloc.so.6.0 00:26:00.940 SO libspdk_bdev_iscsi.so.6.0 00:26:00.940 SYMLINK libspdk_bdev_gpt.so 00:26:00.940 SYMLINK libspdk_bdev_zone_block.so 00:26:00.940 SO libspdk_bdev_delay.so.6.0 00:26:00.940 SYMLINK libspdk_bdev_ftl.so 00:26:00.940 LIB libspdk_bdev_lvol.a 00:26:00.940 SYMLINK libspdk_bdev_iscsi.so 00:26:00.940 LIB libspdk_bdev_virtio.a 00:26:00.940 SYMLINK libspdk_bdev_delay.so 00:26:00.940 SYMLINK libspdk_bdev_malloc.so 00:26:00.940 SO libspdk_bdev_lvol.so.6.0 00:26:00.940 SO libspdk_bdev_virtio.so.6.0 00:26:01.197 SYMLINK libspdk_bdev_lvol.so 00:26:01.197 SYMLINK libspdk_bdev_virtio.so 00:26:01.456 LIB libspdk_bdev_raid.a 00:26:01.456 SO libspdk_bdev_raid.so.6.0 00:26:01.456 SYMLINK libspdk_bdev_raid.so 00:26:02.021 LIB libspdk_bdev_nvme.a 00:26:02.280 SO libspdk_bdev_nvme.so.7.0 00:26:02.280 SYMLINK libspdk_bdev_nvme.so 00:26:02.847 CC module/event/subsystems/keyring/keyring.o 00:26:02.847 CC module/event/subsystems/sock/sock.o 00:26:02.847 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:26:02.847 CC module/event/subsystems/iobuf/iobuf.o 00:26:02.847 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:26:02.847 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:26:02.847 CC module/event/subsystems/vmd/vmd_rpc.o 00:26:02.847 CC module/event/subsystems/vmd/vmd.o 00:26:02.847 CC module/event/subsystems/scheduler/scheduler.o 00:26:03.106 LIB libspdk_event_keyring.a 00:26:03.106 LIB libspdk_event_vhost_blk.a 00:26:03.106 LIB libspdk_event_sock.a 00:26:03.106 SO libspdk_event_keyring.so.1.0 00:26:03.106 LIB libspdk_event_scheduler.a 00:26:03.106 LIB libspdk_event_vmd.a 00:26:03.106 LIB libspdk_event_iobuf.a 00:26:03.106 LIB libspdk_event_vfu_tgt.a 00:26:03.106 SO libspdk_event_vhost_blk.so.3.0 00:26:03.106 SO libspdk_event_sock.so.5.0 00:26:03.106 SO libspdk_event_vmd.so.6.0 00:26:03.106 SO libspdk_event_scheduler.so.4.0 00:26:03.106 SO libspdk_event_iobuf.so.3.0 00:26:03.106 SO libspdk_event_vfu_tgt.so.3.0 00:26:03.106 SYMLINK libspdk_event_keyring.so 00:26:03.106 SYMLINK libspdk_event_vhost_blk.so 00:26:03.106 SYMLINK libspdk_event_sock.so 00:26:03.106 SYMLINK libspdk_event_vmd.so 00:26:03.106 SYMLINK libspdk_event_scheduler.so 00:26:03.106 SYMLINK libspdk_event_vfu_tgt.so 00:26:03.106 SYMLINK libspdk_event_iobuf.so 00:26:03.364 CC module/event/subsystems/accel/accel.o 00:26:03.623 LIB libspdk_event_accel.a 00:26:03.623 SO libspdk_event_accel.so.6.0 00:26:03.623 SYMLINK libspdk_event_accel.so 00:26:03.882 CC module/event/subsystems/bdev/bdev.o 00:26:04.140 LIB libspdk_event_bdev.a 00:26:04.140 SO libspdk_event_bdev.so.6.0 00:26:04.140 SYMLINK libspdk_event_bdev.so 00:26:04.399 CC module/event/subsystems/ublk/ublk.o 00:26:04.399 
CC module/event/subsystems/nbd/nbd.o 00:26:04.399 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:26:04.399 CC module/event/subsystems/scsi/scsi.o 00:26:04.399 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:26:04.656 LIB libspdk_event_ublk.a 00:26:04.656 SO libspdk_event_ublk.so.3.0 00:26:04.656 LIB libspdk_event_nbd.a 00:26:04.656 SO libspdk_event_nbd.so.6.0 00:26:04.656 LIB libspdk_event_scsi.a 00:26:04.656 SYMLINK libspdk_event_ublk.so 00:26:04.656 SO libspdk_event_scsi.so.6.0 00:26:04.656 SYMLINK libspdk_event_nbd.so 00:26:04.656 LIB libspdk_event_nvmf.a 00:26:04.656 SYMLINK libspdk_event_scsi.so 00:26:04.656 SO libspdk_event_nvmf.so.6.0 00:26:04.914 SYMLINK libspdk_event_nvmf.so 00:26:04.914 CC module/event/subsystems/iscsi/iscsi.o 00:26:04.914 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:26:05.172 LIB libspdk_event_iscsi.a 00:26:05.172 LIB libspdk_event_vhost_scsi.a 00:26:05.172 SO libspdk_event_iscsi.so.6.0 00:26:05.172 SO libspdk_event_vhost_scsi.so.3.0 00:26:05.172 SYMLINK libspdk_event_iscsi.so 00:26:05.172 SYMLINK libspdk_event_vhost_scsi.so 00:26:05.430 SO libspdk.so.6.0 00:26:05.430 SYMLINK libspdk.so 00:26:05.692 CXX app/trace/trace.o 00:26:05.692 CC app/spdk_top/spdk_top.o 00:26:05.692 CC app/spdk_nvme_perf/perf.o 00:26:05.692 CC app/trace_record/trace_record.o 00:26:05.692 CC test/rpc_client/rpc_client_test.o 00:26:05.692 TEST_HEADER include/spdk/accel_module.h 00:26:05.692 TEST_HEADER include/spdk/accel.h 00:26:05.692 TEST_HEADER include/spdk/assert.h 00:26:05.692 TEST_HEADER include/spdk/barrier.h 00:26:05.692 TEST_HEADER include/spdk/base64.h 00:26:05.692 CC app/spdk_lspci/spdk_lspci.o 00:26:05.692 TEST_HEADER include/spdk/bdev_module.h 00:26:05.692 CC app/spdk_nvme_discover/discovery_aer.o 00:26:05.692 TEST_HEADER include/spdk/bit_array.h 00:26:05.692 TEST_HEADER include/spdk/bdev.h 00:26:05.693 CC app/spdk_nvme_identify/identify.o 00:26:05.693 TEST_HEADER include/spdk/bdev_zone.h 00:26:05.693 TEST_HEADER include/spdk/blob_bdev.h 00:26:05.693 TEST_HEADER include/spdk/bit_pool.h 00:26:05.693 TEST_HEADER include/spdk/blobfs_bdev.h 00:26:05.693 TEST_HEADER include/spdk/blob.h 00:26:05.693 TEST_HEADER include/spdk/conf.h 00:26:05.693 TEST_HEADER include/spdk/blobfs.h 00:26:05.693 TEST_HEADER include/spdk/cpuset.h 00:26:05.693 TEST_HEADER include/spdk/config.h 00:26:05.693 TEST_HEADER include/spdk/crc16.h 00:26:05.693 TEST_HEADER include/spdk/crc32.h 00:26:05.693 TEST_HEADER include/spdk/crc64.h 00:26:05.693 TEST_HEADER include/spdk/dif.h 00:26:05.693 TEST_HEADER include/spdk/dma.h 00:26:05.693 TEST_HEADER include/spdk/endian.h 00:26:05.693 TEST_HEADER include/spdk/env_dpdk.h 00:26:05.693 TEST_HEADER include/spdk/env.h 00:26:05.693 TEST_HEADER include/spdk/event.h 00:26:05.693 TEST_HEADER include/spdk/fd_group.h 00:26:05.693 TEST_HEADER include/spdk/fd.h 00:26:05.693 TEST_HEADER include/spdk/file.h 00:26:05.693 TEST_HEADER include/spdk/ftl.h 00:26:05.693 TEST_HEADER include/spdk/hexlify.h 00:26:05.693 TEST_HEADER include/spdk/gpt_spec.h 00:26:05.693 TEST_HEADER include/spdk/histogram_data.h 00:26:05.693 TEST_HEADER include/spdk/idxd_spec.h 00:26:05.693 TEST_HEADER include/spdk/idxd.h 00:26:05.693 TEST_HEADER include/spdk/init.h 00:26:05.693 TEST_HEADER include/spdk/ioat.h 00:26:05.693 TEST_HEADER include/spdk/ioat_spec.h 00:26:05.693 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:05.693 TEST_HEADER include/spdk/iscsi_spec.h 00:26:05.693 TEST_HEADER include/spdk/json.h 00:26:05.693 TEST_HEADER include/spdk/jsonrpc.h 00:26:05.693 TEST_HEADER include/spdk/keyring.h 
00:26:05.693 TEST_HEADER include/spdk/keyring_module.h 00:26:05.693 TEST_HEADER include/spdk/log.h 00:26:05.693 TEST_HEADER include/spdk/likely.h 00:26:05.693 CC app/spdk_dd/spdk_dd.o 00:26:05.693 TEST_HEADER include/spdk/lvol.h 00:26:05.693 CC app/iscsi_tgt/iscsi_tgt.o 00:26:05.693 TEST_HEADER include/spdk/memory.h 00:26:05.693 TEST_HEADER include/spdk/nbd.h 00:26:05.693 TEST_HEADER include/spdk/mmio.h 00:26:05.693 CC app/nvmf_tgt/nvmf_main.o 00:26:05.693 TEST_HEADER include/spdk/notify.h 00:26:05.693 TEST_HEADER include/spdk/nvme.h 00:26:05.693 TEST_HEADER include/spdk/nvme_intel.h 00:26:05.693 TEST_HEADER include/spdk/nvme_ocssd.h 00:26:05.693 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:26:05.693 TEST_HEADER include/spdk/nvme_zns.h 00:26:05.693 TEST_HEADER include/spdk/nvme_spec.h 00:26:05.693 TEST_HEADER include/spdk/nvmf_cmd.h 00:26:05.963 TEST_HEADER include/spdk/nvmf.h 00:26:05.963 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:26:05.963 TEST_HEADER include/spdk/nvmf_spec.h 00:26:05.963 TEST_HEADER include/spdk/nvmf_transport.h 00:26:05.963 TEST_HEADER include/spdk/opal.h 00:26:05.963 CC app/vhost/vhost.o 00:26:05.963 TEST_HEADER include/spdk/opal_spec.h 00:26:05.963 TEST_HEADER include/spdk/pci_ids.h 00:26:05.963 TEST_HEADER include/spdk/pipe.h 00:26:05.963 TEST_HEADER include/spdk/queue.h 00:26:05.963 TEST_HEADER include/spdk/reduce.h 00:26:05.963 TEST_HEADER include/spdk/scheduler.h 00:26:05.963 TEST_HEADER include/spdk/rpc.h 00:26:05.963 TEST_HEADER include/spdk/scsi.h 00:26:05.963 TEST_HEADER include/spdk/scsi_spec.h 00:26:05.963 TEST_HEADER include/spdk/sock.h 00:26:05.963 TEST_HEADER include/spdk/stdinc.h 00:26:05.963 TEST_HEADER include/spdk/string.h 00:26:05.963 TEST_HEADER include/spdk/trace.h 00:26:05.963 TEST_HEADER include/spdk/thread.h 00:26:05.963 TEST_HEADER include/spdk/trace_parser.h 00:26:05.963 TEST_HEADER include/spdk/tree.h 00:26:05.963 TEST_HEADER include/spdk/ublk.h 00:26:05.963 TEST_HEADER include/spdk/util.h 00:26:05.963 TEST_HEADER include/spdk/version.h 00:26:05.963 TEST_HEADER include/spdk/uuid.h 00:26:05.963 TEST_HEADER include/spdk/vfio_user_pci.h 00:26:05.963 CC app/spdk_tgt/spdk_tgt.o 00:26:05.963 TEST_HEADER include/spdk/vfio_user_spec.h 00:26:05.963 TEST_HEADER include/spdk/vhost.h 00:26:05.963 TEST_HEADER include/spdk/vmd.h 00:26:05.963 TEST_HEADER include/spdk/xor.h 00:26:05.963 TEST_HEADER include/spdk/zipf.h 00:26:05.963 CXX test/cpp_headers/accel.o 00:26:05.963 CXX test/cpp_headers/accel_module.o 00:26:05.963 CXX test/cpp_headers/assert.o 00:26:05.963 CXX test/cpp_headers/barrier.o 00:26:05.963 CXX test/cpp_headers/base64.o 00:26:05.963 CXX test/cpp_headers/bdev.o 00:26:05.963 CXX test/cpp_headers/bdev_zone.o 00:26:05.963 CXX test/cpp_headers/bdev_module.o 00:26:05.963 CXX test/cpp_headers/bit_array.o 00:26:05.963 CXX test/cpp_headers/bit_pool.o 00:26:05.963 CXX test/cpp_headers/blob_bdev.o 00:26:05.963 CXX test/cpp_headers/blobfs_bdev.o 00:26:05.963 CXX test/cpp_headers/blobfs.o 00:26:05.963 CXX test/cpp_headers/blob.o 00:26:05.963 CXX test/cpp_headers/conf.o 00:26:05.963 CXX test/cpp_headers/config.o 00:26:05.963 CXX test/cpp_headers/cpuset.o 00:26:05.963 CXX test/cpp_headers/crc16.o 00:26:05.963 CXX test/cpp_headers/crc32.o 00:26:05.963 CXX test/cpp_headers/crc64.o 00:26:05.963 CXX test/cpp_headers/dif.o 00:26:05.963 CXX test/cpp_headers/dma.o 00:26:05.963 CC test/nvme/e2edp/nvme_dp.o 00:26:05.963 CC test/env/memory/memory_ut.o 00:26:05.963 CC examples/ioat/perf/perf.o 00:26:05.963 CC test/app/histogram_perf/histogram_perf.o 00:26:05.963 CC 
test/env/vtophys/vtophys.o 00:26:05.963 CC test/nvme/reserve/reserve.o 00:26:05.963 CC test/nvme/boot_partition/boot_partition.o 00:26:05.963 CC test/env/pci/pci_ut.o 00:26:05.963 CC test/nvme/overhead/overhead.o 00:26:05.963 CC test/nvme/sgl/sgl.o 00:26:05.963 CC test/nvme/startup/startup.o 00:26:05.963 CC test/nvme/aer/aer.o 00:26:05.963 CC examples/ioat/verify/verify.o 00:26:05.963 CC test/nvme/simple_copy/simple_copy.o 00:26:05.963 CC test/nvme/compliance/nvme_compliance.o 00:26:05.963 CC test/app/stub/stub.o 00:26:05.963 CC test/nvme/reset/reset.o 00:26:05.963 CC test/nvme/fdp/fdp.o 00:26:05.963 CC test/event/reactor_perf/reactor_perf.o 00:26:05.963 CC test/nvme/cuse/cuse.o 00:26:05.963 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:26:05.963 CC examples/nvme/reconnect/reconnect.o 00:26:05.963 CC test/nvme/fused_ordering/fused_ordering.o 00:26:05.963 CC examples/nvme/hello_world/hello_world.o 00:26:05.963 CC examples/nvme/hotplug/hotplug.o 00:26:05.963 CC examples/accel/perf/accel_perf.o 00:26:05.963 CC test/event/reactor/reactor.o 00:26:05.963 CC test/nvme/connect_stress/connect_stress.o 00:26:05.963 CC test/nvme/doorbell_aers/doorbell_aers.o 00:26:05.963 CC test/nvme/err_injection/err_injection.o 00:26:05.963 CC examples/nvme/nvme_manage/nvme_manage.o 00:26:05.963 CC examples/nvme/cmb_copy/cmb_copy.o 00:26:05.963 CC test/app/jsoncat/jsoncat.o 00:26:05.963 CC examples/vmd/led/led.o 00:26:05.963 CC examples/vmd/lsvmd/lsvmd.o 00:26:05.963 CC test/event/event_perf/event_perf.o 00:26:05.963 CC examples/nvme/arbitration/arbitration.o 00:26:05.963 CC app/fio/nvme/fio_plugin.o 00:26:05.963 CC examples/util/zipf/zipf.o 00:26:05.963 CC examples/nvme/abort/abort.o 00:26:05.963 CC examples/sock/hello_world/hello_sock.o 00:26:05.963 CC examples/nvmf/nvmf/nvmf.o 00:26:05.963 CC examples/idxd/perf/perf.o 00:26:05.963 CC test/thread/poller_perf/poller_perf.o 00:26:05.963 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:05.963 CC examples/blob/cli/blobcli.o 00:26:05.963 CC test/app/bdev_svc/bdev_svc.o 00:26:05.963 CC examples/thread/thread/thread_ex.o 00:26:05.963 CC test/event/scheduler/scheduler.o 00:26:05.963 CC test/event/app_repeat/app_repeat.o 00:26:05.963 CC test/dma/test_dma/test_dma.o 00:26:05.963 CC examples/blob/hello_world/hello_blob.o 00:26:05.963 CC test/bdev/bdevio/bdevio.o 00:26:05.963 CC test/blobfs/mkfs/mkfs.o 00:26:05.963 CC examples/bdev/hello_world/hello_bdev.o 00:26:05.963 CC examples/bdev/bdevperf/bdevperf.o 00:26:06.234 CC test/accel/dif/dif.o 00:26:06.234 CC app/fio/bdev/fio_plugin.o 00:26:06.234 CC test/lvol/esnap/esnap.o 00:26:06.234 LINK rpc_client_test 00:26:06.234 LINK spdk_lspci 00:26:06.234 CC test/env/mem_callbacks/mem_callbacks.o 00:26:06.504 LINK vhost 00:26:06.504 LINK spdk_nvme_discover 00:26:06.504 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:26:06.504 LINK iscsi_tgt 00:26:06.504 LINK spdk_trace_record 00:26:06.504 LINK nvmf_tgt 00:26:06.504 LINK histogram_perf 00:26:06.504 LINK spdk_tgt 00:26:06.504 LINK interrupt_tgt 00:26:06.504 LINK boot_partition 00:26:06.504 LINK poller_perf 00:26:06.504 LINK event_perf 00:26:06.504 CXX test/cpp_headers/endian.o 00:26:06.504 CXX test/cpp_headers/env_dpdk.o 00:26:06.504 CXX test/cpp_headers/env.o 00:26:06.504 LINK env_dpdk_post_init 00:26:06.504 CXX test/cpp_headers/event.o 00:26:06.504 CXX test/cpp_headers/fd.o 00:26:06.504 CXX test/cpp_headers/fd_group.o 00:26:06.504 LINK cmb_copy 00:26:06.504 LINK app_repeat 00:26:06.504 CXX test/cpp_headers/file.o 00:26:06.504 LINK reserve 00:26:06.504 LINK connect_stress 
00:26:06.504 CXX test/cpp_headers/ftl.o 00:26:06.504 CXX test/cpp_headers/gpt_spec.o 00:26:06.504 LINK vtophys 00:26:06.504 LINK reactor_perf 00:26:06.504 LINK lsvmd 00:26:06.504 LINK jsoncat 00:26:06.504 LINK reactor 00:26:06.504 CXX test/cpp_headers/hexlify.o 00:26:06.504 LINK led 00:26:06.504 LINK bdev_svc 00:26:06.504 LINK simple_copy 00:26:06.504 LINK zipf 00:26:06.504 LINK stub 00:26:06.504 LINK reset 00:26:06.504 LINK startup 00:26:06.504 LINK doorbell_aers 00:26:06.504 LINK sgl 00:26:06.504 CXX test/cpp_headers/histogram_data.o 00:26:06.504 LINK err_injection 00:26:06.504 LINK scheduler 00:26:06.504 LINK hotplug 00:26:06.504 CXX test/cpp_headers/idxd.o 00:26:06.504 CXX test/cpp_headers/idxd_spec.o 00:26:06.504 LINK pmr_persistence 00:26:06.504 CXX test/cpp_headers/init.o 00:26:06.504 LINK thread 00:26:06.504 CXX test/cpp_headers/ioat.o 00:26:06.504 CXX test/cpp_headers/ioat_spec.o 00:26:06.504 LINK ioat_perf 00:26:06.764 CXX test/cpp_headers/iscsi_spec.o 00:26:06.764 LINK aer 00:26:06.764 LINK hello_world 00:26:06.764 LINK mkfs 00:26:06.764 LINK fused_ordering 00:26:06.764 LINK hello_bdev 00:26:06.764 CXX test/cpp_headers/json.o 00:26:06.764 LINK verify 00:26:06.764 LINK spdk_trace 00:26:06.764 CXX test/cpp_headers/jsonrpc.o 00:26:06.764 LINK mem_callbacks 00:26:06.764 CXX test/cpp_headers/keyring.o 00:26:06.764 CXX test/cpp_headers/likely.o 00:26:06.764 CXX test/cpp_headers/log.o 00:26:06.764 CXX test/cpp_headers/keyring_module.o 00:26:06.764 LINK nvme_compliance 00:26:06.764 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:26:06.764 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:26:06.764 LINK hello_sock 00:26:06.764 CXX test/cpp_headers/lvol.o 00:26:06.764 LINK hello_blob 00:26:06.764 LINK spdk_dd 00:26:06.764 LINK overhead 00:26:06.764 LINK nvme_dp 00:26:06.764 CXX test/cpp_headers/memory.o 00:26:06.764 LINK pci_ut 00:26:06.764 CXX test/cpp_headers/mmio.o 00:26:06.764 CXX test/cpp_headers/nbd.o 00:26:06.764 LINK reconnect 00:26:06.764 LINK fdp 00:26:06.764 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:26:06.764 CXX test/cpp_headers/notify.o 00:26:06.764 CXX test/cpp_headers/nvme.o 00:26:06.764 CXX test/cpp_headers/nvme_intel.o 00:26:06.764 CXX test/cpp_headers/nvme_ocssd.o 00:26:06.764 CXX test/cpp_headers/nvme_ocssd_spec.o 00:26:06.764 CXX test/cpp_headers/nvme_spec.o 00:26:06.764 CXX test/cpp_headers/nvme_zns.o 00:26:06.764 CXX test/cpp_headers/nvmf_cmd.o 00:26:06.764 LINK bdevio 00:26:06.764 CXX test/cpp_headers/nvmf_fc_spec.o 00:26:06.764 CXX test/cpp_headers/nvmf.o 00:26:06.764 LINK nvmf 00:26:06.764 LINK test_dma 00:26:06.764 LINK arbitration 00:26:06.764 CXX test/cpp_headers/nvmf_spec.o 00:26:06.764 CXX test/cpp_headers/nvmf_transport.o 00:26:06.764 CXX test/cpp_headers/opal.o 00:26:06.764 CXX test/cpp_headers/opal_spec.o 00:26:06.764 LINK idxd_perf 00:26:06.764 CXX test/cpp_headers/pci_ids.o 00:26:06.764 CXX test/cpp_headers/pipe.o 00:26:07.023 CXX test/cpp_headers/queue.o 00:26:07.023 LINK abort 00:26:07.023 CXX test/cpp_headers/reduce.o 00:26:07.023 CXX test/cpp_headers/rpc.o 00:26:07.023 CXX test/cpp_headers/scheduler.o 00:26:07.023 LINK dif 00:26:07.023 CXX test/cpp_headers/scsi.o 00:26:07.023 CXX test/cpp_headers/scsi_spec.o 00:26:07.023 CXX test/cpp_headers/sock.o 00:26:07.023 CXX test/cpp_headers/stdinc.o 00:26:07.023 CXX test/cpp_headers/string.o 00:26:07.023 CXX test/cpp_headers/thread.o 00:26:07.023 LINK blobcli 00:26:07.023 CXX test/cpp_headers/trace.o 00:26:07.023 CXX test/cpp_headers/trace_parser.o 00:26:07.023 CXX test/cpp_headers/tree.o 00:26:07.023 CXX 
test/cpp_headers/ublk.o 00:26:07.023 CXX test/cpp_headers/util.o 00:26:07.023 CXX test/cpp_headers/uuid.o 00:26:07.023 CXX test/cpp_headers/version.o 00:26:07.023 CXX test/cpp_headers/vfio_user_pci.o 00:26:07.023 CXX test/cpp_headers/vfio_user_spec.o 00:26:07.023 CXX test/cpp_headers/vmd.o 00:26:07.023 CXX test/cpp_headers/vhost.o 00:26:07.023 CXX test/cpp_headers/xor.o 00:26:07.023 CXX test/cpp_headers/zipf.o 00:26:07.023 LINK spdk_bdev 00:26:07.023 LINK accel_perf 00:26:07.023 LINK nvme_fuzz 00:26:07.023 LINK nvme_manage 00:26:07.281 LINK memory_ut 00:26:07.281 LINK spdk_nvme 00:26:07.281 LINK spdk_top 00:26:07.281 LINK bdevperf 00:26:07.281 LINK spdk_nvme_perf 00:26:07.281 LINK spdk_nvme_identify 00:26:07.540 LINK vhost_fuzz 00:26:07.798 LINK cuse 00:26:08.364 LINK iscsi_fuzz 00:26:10.330 LINK esnap 00:26:10.330 00:26:10.330 real 0m32.916s 00:26:10.330 user 5m27.317s 00:26:10.330 sys 2m45.752s 00:26:10.330 03:22:51 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:26:10.331 03:22:51 make -- common/autotest_common.sh@10 -- $ set +x 00:26:10.331 ************************************ 00:26:10.331 END TEST make 00:26:10.331 ************************************ 00:26:10.331 03:22:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:26:10.331 03:22:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:10.331 03:22:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:10.331 03:22:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.331 03:22:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:26:10.331 03:22:51 -- pm/common@44 -- $ pid=1850081 00:26:10.331 03:22:51 -- pm/common@50 -- $ kill -TERM 1850081 00:26:10.331 03:22:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.331 03:22:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:26:10.331 03:22:51 -- pm/common@44 -- $ pid=1850083 00:26:10.331 03:22:51 -- pm/common@50 -- $ kill -TERM 1850083 00:26:10.331 03:22:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.331 03:22:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:26:10.331 03:22:51 -- pm/common@44 -- $ pid=1850085 00:26:10.331 03:22:51 -- pm/common@50 -- $ kill -TERM 1850085 00:26:10.331 03:22:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.331 03:22:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:26:10.331 03:22:51 -- pm/common@44 -- $ pid=1850111 00:26:10.331 03:22:51 -- pm/common@50 -- $ sudo -E kill -TERM 1850111 00:26:10.590 03:22:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.590 03:22:51 -- nvmf/common.sh@7 -- # uname -s 00:26:10.590 03:22:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.590 03:22:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.590 03:22:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.590 03:22:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.590 03:22:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.590 03:22:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.590 03:22:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.590 03:22:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.590 03:22:51 -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.590 03:22:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.590 03:22:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:10.590 03:22:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:10.590 03:22:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.590 03:22:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.590 03:22:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.590 03:22:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.590 03:22:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.590 03:22:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.590 03:22:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.590 03:22:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.590 03:22:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.590 03:22:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.590 03:22:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.590 03:22:51 -- paths/export.sh@5 -- # export PATH 00:26:10.590 03:22:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.590 03:22:51 -- nvmf/common.sh@47 -- # : 0 00:26:10.590 03:22:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.590 03:22:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.590 03:22:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.590 03:22:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.590 03:22:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.590 03:22:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.590 03:22:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.590 03:22:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.590 03:22:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:26:10.590 03:22:51 -- spdk/autotest.sh@32 -- # uname -s 00:26:10.590 03:22:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:26:10.590 03:22:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:26:10.590 03:22:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:26:10.590 03:22:51 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:26:10.590 03:22:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:26:10.590 03:22:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:26:10.590 03:22:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:26:10.590 03:22:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:26:10.590 03:22:51 -- spdk/autotest.sh@48 -- # udevadm_pid=1922439 00:26:10.590 03:22:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:26:10.590 03:22:51 -- pm/common@17 -- # local monitor 00:26:10.590 03:22:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.590 03:22:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:26:10.590 03:22:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.590 03:22:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.590 03:22:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:26:10.590 03:22:51 -- pm/common@25 -- # sleep 1 00:26:10.590 03:22:51 -- pm/common@21 -- # date +%s 00:26:10.590 03:22:51 -- pm/common@21 -- # date +%s 00:26:10.590 03:22:51 -- pm/common@21 -- # date +%s 00:26:10.590 03:22:51 -- pm/common@21 -- # date +%s 00:26:10.590 03:22:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718068971 00:26:10.590 03:22:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718068971 00:26:10.590 03:22:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718068971 00:26:10.590 03:22:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718068971 00:26:10.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718068971_collect-vmstat.pm.log 00:26:10.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718068971_collect-cpu-load.pm.log 00:26:10.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718068971_collect-cpu-temp.pm.log 00:26:10.590 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718068971_collect-bmc-pm.bmc.pm.log 00:26:11.527 03:22:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:26:11.527 03:22:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:26:11.527 03:22:52 -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:11.527 03:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:11.527 03:22:52 -- spdk/autotest.sh@59 -- # create_test_list 00:26:11.527 03:22:52 -- common/autotest_common.sh@747 -- # xtrace_disable 00:26:11.527 03:22:52 -- common/autotest_common.sh@10 -- # set +x 00:26:11.527 03:22:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:26:11.527 03:22:52 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:11.527 03:22:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:11.527 03:22:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:26:11.527 03:22:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:26:11.527 03:22:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:26:11.527 03:22:52 -- common/autotest_common.sh@1454 -- # uname 00:26:11.527 03:22:52 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:26:11.527 03:22:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:26:11.527 03:22:52 -- common/autotest_common.sh@1474 -- # uname 00:26:11.527 03:22:52 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:26:11.527 03:22:52 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:26:11.527 03:22:52 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:26:11.527 03:22:52 -- spdk/autotest.sh@72 -- # hash lcov 00:26:11.527 03:22:52 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:11.527 03:22:52 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:26:11.527 --rc lcov_branch_coverage=1 00:26:11.527 --rc lcov_function_coverage=1 00:26:11.527 --rc genhtml_branch_coverage=1 00:26:11.527 --rc genhtml_function_coverage=1 00:26:11.527 --rc genhtml_legend=1 00:26:11.527 --rc geninfo_all_blocks=1 00:26:11.527 ' 00:26:11.527 03:22:52 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:26:11.527 --rc lcov_branch_coverage=1 00:26:11.527 --rc lcov_function_coverage=1 00:26:11.527 --rc genhtml_branch_coverage=1 00:26:11.527 --rc genhtml_function_coverage=1 00:26:11.527 --rc genhtml_legend=1 00:26:11.527 --rc geninfo_all_blocks=1 00:26:11.527 ' 00:26:11.527 03:22:52 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:26:11.527 --rc lcov_branch_coverage=1 00:26:11.527 --rc lcov_function_coverage=1 00:26:11.527 --rc genhtml_branch_coverage=1 00:26:11.527 --rc genhtml_function_coverage=1 00:26:11.527 --rc genhtml_legend=1 00:26:11.527 --rc geninfo_all_blocks=1 00:26:11.527 --no-external' 00:26:11.527 03:22:52 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:26:11.527 --rc lcov_branch_coverage=1 00:26:11.527 --rc lcov_function_coverage=1 00:26:11.527 --rc genhtml_branch_coverage=1 00:26:11.527 --rc genhtml_function_coverage=1 00:26:11.527 --rc genhtml_legend=1 00:26:11.527 --rc geninfo_all_blocks=1 00:26:11.527 --no-external' 00:26:11.527 03:22:52 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:26:11.785 lcov: LCOV version 1.14 00:26:11.785 03:22:52 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:26:19.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:26:19.897 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:26:32.108 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:26:32.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:26:32.108 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:26:32.109 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:26:32.109 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:26:32.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:26:32.109 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:26:32.109-00:26:32.110 geninfo: WARNING: GCOV did not produce any data ("no functions found") for the remaining test/cpp_headers objects: nvmf.gcno, nvmf_cmd.gcno, nvme_zns.gcno, nvmf_spec.gcno, nvmf_transport.gcno, opal.gcno, opal_spec.gcno, pci_ids.gcno, pipe.gcno, queue.gcno, reduce.gcno, rpc.gcno, scsi.gcno, sock.gcno, scheduler.gcno, scsi_spec.gcno, stdinc.gcno, string.gcno, thread.gcno, trace.gcno, trace_parser.gcno, ublk.gcno, tree.gcno, uuid.gcno, util.gcno, version.gcno, vfio_user_pci.gcno, vfio_user_spec.gcno, vmd.gcno, xor.gcno, zipf.gcno, vhost.gcno
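These warnings are expected for this suite: each test/cpp_headers object is built from a translation unit that does nothing but include one public SPDK header, so the compiler emits a .gcno containing no instrumented functions and geninfo has nothing to report. A minimal sketch of how such an empty notes file arises (the file name and header here are illustrative, not from this build):

    # Compile a header-only translation unit with coverage instrumentation.
    echo '#include <vector>  // declarations only; this TU defines no functions' > header_only.cpp
    g++ --coverage -c header_only.cpp -o header_only.o   # also writes header_only.gcno
    # Any gcov/geninfo capture over header_only.gcno then reports "no functions found",
    # the same warning seen above, because the notes file holds no function records.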
00:26:33.046 03:23:14 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:26:33.046 03:23:14 -- common/autotest_common.sh@723 -- # xtrace_disable
00:26:33.046 03:23:14 -- common/autotest_common.sh@10 -- # set +x
00:26:33.046 03:23:14 -- spdk/autotest.sh@91 -- # rm -f
00:26:33.046 03:23:14 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:35.580 0000:5f:00.0 (8086 0a54): Already using the nvme driver
00:26:35.580-00:26:35.839 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 2021): Already using the ioatdma driver (16 I/OAT channels)
00:26:35.839 03:23:17 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:26:35.839 03:23:17 -- common/autotest_common.sh@1668-1672 -- # is_block_zoned nvme0n1: /sys/block/nvme0n1/queue/zoned reads "none", so no zoned devices are collected
00:26:35.839 03:23:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:26:35.839 03:23:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:26:35.839 03:23:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:26:35.839 03:23:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:26:35.839 03:23:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:26:35.839 03:23:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:26:35.839 No valid GPT data, bailing
00:26:35.839 03:23:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:26:35.839 03:23:17 -- scripts/common.sh@391 -- # pt=
00:26:35.839 03:23:17 -- scripts/common.sh@392 -- # return 1
00:26:35.839 03:23:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:26:35.839 1+0 records in
00:26:35.839 1+0 records out
00:26:35.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600541 s, 175 MB/s
00:26:35.839 03:23:17 -- spdk/autotest.sh@118 -- # sync
00:26:35.839 03:23:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:26:35.839 03:23:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:26:35.839 03:23:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes
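The wipe above is deliberately gated on the partition-table probe: spdk-gpt.py bails without valid GPT data, blkid then reports an empty PTTYPE, block_in_use returns 1 (device not claimed), and only then does autotest zero the first MiB of the namespace. A condensed sketch of that gate, using the device from the trace:

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no partition table exists
    if [[ -z "$pt" ]]; then
        # Not claimed by any partition table: scrub stale metadata from the first MiB.
        dd if=/dev/zero of="$dev" bs=1M count=1
        sync
    fi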
00:26:40.036 03:23:21 -- spdk/autotest.sh@124 -- # uname -s
00:26:40.036 03:23:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:26:40.036 03:23:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:26:40.036 03:23:21 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:26:40.036 03:23:21 -- common/autotest_common.sh@1106 -- # xtrace_disable
00:26:40.036 03:23:21 -- common/autotest_common.sh@10 -- # set +x
00:26:40.296 ************************************
00:26:40.296 START TEST setup.sh
00:26:40.296 ************************************
00:26:40.296 03:23:21 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:26:40.296 * Looking for test storage...
00:26:40.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:26:40.296 03:23:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:26:40.296 03:23:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:26:40.296 03:23:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:26:40.296 03:23:21 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:26:40.296 03:23:21 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:26:40.296 03:23:21 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:26:40.296 ************************************
00:26:40.296 START TEST acl
00:26:40.296 ************************************
00:26:40.296 03:23:21 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:26:40.296 * Looking for test storage...
00:26:40.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:26:40.296 03:23:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:26:40.296 03:23:21 setup.sh.acl -- common/autotest_common.sh@1668-1672 -- # is_block_zoned nvme0n1: /sys/block/nvme0n1/queue/zoned reads "none", so no zoned devices are collected
00:26:40.296 03:23:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:26:40.296 03:23:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:26:40.296 03:23:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:26:40.296 03:23:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:26:40.296 03:23:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:26:40.296 03:23:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:26:40.296 03:23:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:43.584 03:23:24 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:26:43.584 03:23:24 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:26:43.584 03:23:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:26:43.584 03:23:24 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:26:43.584 03:23:24 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:26:43.584 03:23:24 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:26:46.875 Hugepages
00:26:46.875 node hugesize free / total
00:26:46.876
00:26:46.876 Type BDF Vendor Device NUMA Driver Device Block devices
00:26:46.876 03:23:27 setup.sh.acl -- setup/acl.sh@18-22 -- # read loop over the status table: hugepage rows (1048576kB, 2048kB) are skipped as non-BDF lines; 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 2021, ioatdma) are skipped; 0000:5f:00.0 (8086 0a54, nvme) is kept -> devs+=("$dev"), drivers["$dev"]=nvme
00:26:46.876 03:23:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
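The classification above is a single read loop over the 'setup.sh status' output: any line whose second field is not a PCI BDF is skipped, ioatdma-bound I/OAT channels are skipped, and the one nvme-bound controller lands in devs/drivers. A condensed sketch of the same loop as traced in acl.sh:

    devs=()
    declare -A drivers
    while read -r _ bdf _ _ _ driver _; do
        [[ $bdf == *:*:*.* ]] || continue   # skip the hugepage table rows
        [[ $driver == nvme ]] || continue   # ioatdma channels are not test targets
        devs+=("$bdf")
        drivers["$bdf"]=$driver
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status)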
00:26:46.876 03:23:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:26:46.876 03:23:27 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:26:46.876 03:23:27 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable
00:26:46.876 03:23:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:26:46.876 ************************************
00:26:46.876 START TEST denied
00:26:46.876 ************************************
00:26:46.876 03:23:27 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied
00:26:46.876 03:23:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0'
00:26:46.876 03:23:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:26:46.876 03:23:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0'
00:26:46.876 03:23:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:26:46.876 03:23:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:26:50.168 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]]
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:26:50.168 03:23:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:53.493
00:26:53.493 real 0m6.746s
00:26:53.493 user 0m2.018s
00:26:53.493 sys 0m3.754s
00:26:53.493 03:23:34 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable
00:26:53.493 03:23:34 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:26:53.493 ************************************
00:26:53.493 END TEST denied
00:26:53.493 ************************************
00:26:53.493 03:23:34 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:26:53.493 03:23:34 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:26:53.493 03:23:34 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable
00:26:53.493 03:23:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:26:53.493 ************************************
00:26:53.493 START TEST allowed
00:26:53.493 ************************************
00:26:53.493 03:23:34 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed
00:26:53.493 03:23:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0
00:26:53.493 03:23:34 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:26:53.493 03:23:34 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*'
00:26:53.493 03:23:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:26:53.493 03:23:34 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:26:58.764 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
00:26:58.764 03:23:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:26:58.764 03:23:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:26:58.764 03:23:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:26:58.764 03:23:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:26:58.764 03:23:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:01.301
00:27:01.301 real 0m7.851s
00:27:01.301 user 0m2.258s
00:27:01.301 sys 0m4.127s
00:27:01.301 03:23:42 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable
00:27:01.301 03:23:42 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:27:01.301 ************************************
00:27:01.301 END TEST allowed
00:27:01.301 ************************************
00:27:01.301
00:27:01.301 real 0m20.905s
00:27:01.301 user 0m6.609s
00:27:01.301 sys 0m11.947s
00:27:01.301 03:23:42 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable
00:27:01.301 03:23:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:27:01.301 ************************************
00:27:01.301 END TEST acl
00:27:01.301 ************************************
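The denied and allowed subtests drive the same script with opposite filters: PCI_BLOCKED makes setup.sh refuse to touch the controller, while PCI_ALLOWED restricts rebinding to it alone. Roughly what each subtest executed, with the greps acting as the assertions (paths shortened relative to the workspace):

    # denied: 'setup.sh config' must skip the blocked controller
    PCI_BLOCKED=' 0000:5f:00.0' scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:5f:00.0'

    # allowed: only the allowed controller is rebound (nvme -> vfio-pci in this run)
    PCI_ALLOWED=0000:5f:00.0 scripts/setup.sh config \
        | grep -E '0000:5f:00.0 .*: nvme -> .*'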
00:27:01.301 03:23:42 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:27:01.301 03:23:42 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:27:01.301 03:23:42 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable
00:27:01.301 03:23:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:27:01.301 ************************************
00:27:01.301 START TEST hugepages
00:27:01.301 ************************************
00:27:01.301 03:23:42 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:27:01.301 * Looking for test storage...
00:27:01.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:27:01.301 03:23:42 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 167212076 kB' 'MemAvailable: 170227112 kB' 'Buffers: 4132 kB' 'Cached: 15893688 kB' 'SwapCached: 0 kB' 'Active: 12997436 kB' 'Inactive: 3540920 kB' 'Active(anon): 12520464 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644352 kB' 'Mapped: 212752 kB' 'Shmem: 11879928 kB' 'KReclaimable: 282720 kB' 'Slab: 925496 kB' 'SReclaimable: 282720 kB' 'SUnreclaim: 642776 kB' 'KernelStack: 20944 kB' 'PageTables: 9484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982036 kB' 'Committed_AS: 14045140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318032 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
00:27:01.301-00:27:01.303 03:23:42 setup.sh.hugepages -- setup/common.sh@31-32 -- # scan: every /proc/meminfo key ahead of Hugepagesize (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) is tested against Hugepagesize and skipped with 'continue'
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@29-30 -- # nodes_sys[0]=2048, nodes_sys[1]=0
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # for each node (0, 1) and each hugepage size under /sys/devices/system/node/node$node/hugepages/hugepages-*: echo 0
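All of those [[ key == Hugepagesize ]] / continue pairs are one helper walking /proc/meminfo field by field until the requested key matches. A condensed sketch of get_meminfo as traced (the real helper in setup/common.sh also accepts a node argument and then reads that node's meminfo file instead):

    get_meminfo() {
        local get=$1 var val _
        # IFS=': ' splits "Hugepagesize:    2048 kB" into key, value, and unit.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only, e.g. 2048
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize   # -> 2048 on this node, matching the 'echo 2048' in the trace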
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:27:01.303 03:23:42 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:27:01.303 03:23:42 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:27:01.303 03:23:42 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:27:01.303 03:23:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:27:01.562 ************************************
00:27:01.562 START TEST default_setup
00:27:01.562 ************************************
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:27:01.562 03:23:42 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
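get_test_nr_hugepages turns the requested pool size into a page count before default_setup touches the kernel: against the 2048 kB page size read earlier, the request of 2097152 yields the 1024 pages assigned to node 0 above. The arithmetic, spelled out with the values from the trace:

    size=2097152             # from: get_test_nr_hugepages 2097152 0
    default_hugepages=2048   # from: get_meminfo Hugepagesize
    nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024
    echo "$nr_hugepages"     # matches nr_hugepages=1024 and nodes_test[0]=1024 above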
00:27:04.853 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (8086 2021): ioatdma -> vfio-pci (16 I/OAT channels rebound)
00:27:06.236 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:27:06.236 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169383088 kB' 'MemAvailable: 172397948 kB' 'Buffers: 4132 kB' 'Cached: 15893804 kB' 'SwapCached: 0 kB' 'Active: 13014888 kB' 'Inactive: 3540920 kB' 'Active(anon): 12537916 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661124 kB' 'Mapped: 212916 kB' 'Shmem: 11880044 kB' 'KReclaimable: 282368 kB' 'Slab: 923008 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640640 kB' 'KernelStack: 21200 kB' 'PageTables: 9984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14063484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # scan: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab each tested against AnonHugePages and skipped with 'continue'
00:27:06.237 03:23:47
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.237 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169390224 kB' 'MemAvailable: 172405084 kB' 'Buffers: 4132 kB' 'Cached: 15893808 kB' 'SwapCached: 0 kB' 'Active: 13015264 kB' 'Inactive: 3540920 kB' 'Active(anon): 12538292 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661504 kB' 'Mapped: 212904 kB' 'Shmem: 11880048 kB' 'KReclaimable: 282368 kB' 'Slab: 923004 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640636 kB' 'KernelStack: 21200 kB' 'PageTables: 9424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14064992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318160 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.238 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.239 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169390416 kB' 'MemAvailable: 172405276 kB' 'Buffers: 4132 kB' 'Cached: 15893828 kB' 'SwapCached: 0 kB' 'Active: 13014028 kB' 'Inactive: 3540920 kB' 'Active(anon): 12537056 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660196 kB' 'Mapped: 212904 kB' 'Shmem: 11880068 kB' 'KReclaimable: 282368 kB' 'Slab: 923100 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640732 kB' 'KernelStack: 20976 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14063520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 
03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.240 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.241 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:27:06.242 nr_hugepages=1024 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:27:06.242 resv_hugepages=0 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:27:06.242 surplus_hugepages=0 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:27:06.242 anon_hugepages=0 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169388008 kB' 'MemAvailable: 172402868 kB' 'Buffers: 4132 kB' 'Cached: 15893848 kB' 'SwapCached: 0 kB' 'Active: 
13014980 kB' 'Inactive: 3540920 kB' 'Active(anon): 12538008 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661144 kB' 'Mapped: 212904 kB' 'Shmem: 11880088 kB' 'KReclaimable: 282368 kB' 'Slab: 923196 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640828 kB' 'KernelStack: 21136 kB' 'PageTables: 9808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14063544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318256 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.242 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.242 03:23:47 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the read loop continues past every /proc/meminfo key from Active through Unaccepted, each tested against HugePages_Total, until the matching key is reached]
00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:27:06.244
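[editor's note] The get_meminfo calls traced above resolve to HugePages_Rsvd=0 and HugePages_Total=1024, and hugepages.sh@107/@110 then check the identity 1024 == nr_hugepages + surplus + reserved (1024 + 0 + 0). Below is a minimal bash sketch of what setup/common.sh's get_meminfo appears to do, reconstructed from this trace for readability; it is an approximation, not the verbatim SPDK source:

  #!/usr/bin/env bash
  shopt -s extglob
  # get_meminfo KEY [NODE] -- echo the value of KEY from /proc/meminfo,
  # or from the per-node meminfo file when NODE is given.
  get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start "Node N "; strip it (extglob)
      local IFS=': '
      local line
      for line in "${mem[@]}"; do
          read -r var val _ <<< "$line"   # e.g. var=HugePages_Total val=1024
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done
      return 1
  }

The escaped patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d in the trace appear to be just xtrace's rendering of the unquoted right-hand side of that [[ $var == $get ]] comparison.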
03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 82105288 kB' 'MemUsed: 15510340 kB' 'SwapCached: 0 kB' 'Active: 8823112 kB' 'Inactive: 3343336 kB' 'Active(anon): 8604980 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11762616 kB' 'Mapped: 143440 kB' 'AnonPages: 407448 kB' 'Shmem: 8201148 kB' 'KernelStack: 13384 kB' 'PageTables: 7144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 187364 kB' 'Slab: 536132 kB' 'SReclaimable: 187364 kB' 'SUnreclaim: 348768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:27:06.244 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.244 03:23:47 
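[editor's note] The two meminfo snapshots printed above are internally consistent; the following checks use only numbers copied from the trace:

  # system-wide dump (setup/common.sh@16 above): 1024 hugepages of 2048 kB each
  echo $(( 1024 * 2048 ))           # 2097152 kB -- matches 'Hugetlb: 2097152 kB'
  # node0 dump (the printf immediately above): MemTotal minus MemFree
  echo $(( 97615628 - 82105288 ))   # 15510340 kB -- matches 'MemUsed: 15510340 kB'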
[xtrace condensed: the read loop now scans /sys/devices/system/node/node0/meminfo, continuing past every key from MemTotal through Unaccepted, each tested against HugePages_Surp; the final iterations and the match resume below]
00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.245 03:23:47
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:27:06.245 node0=1024 expecting 1024 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:27:06.245 00:27:06.245 real 0m4.796s 00:27:06.245 user 0m1.378s 00:27:06.245 sys 0m2.071s 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:06.245 03:23:47 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:27:06.245 ************************************ 00:27:06.245 END TEST default_setup 00:27:06.245 ************************************ 00:27:06.245 03:23:47 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:27:06.245 03:23:47 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:06.245 03:23:47 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:06.245 03:23:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:27:06.245 ************************************ 00:27:06.245 START TEST per_node_1G_alloc 00:27:06.245 ************************************ 00:27:06.245 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:27:06.245 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:27:06.245 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:27:06.245 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
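[editor's note] per_node_1G_alloc asks get_test_nr_hugepages for 1048576 kB (1 GiB) on each of nodes 0 and 1, and the trace continuing below lands on nr_hugepages=512. The following is a hedged reconstruction of that sizing; variable names follow the trace (hugepages.sh@49-@73), but the division is inferred from the traced numbers, not confirmed source:

  # Hypothetical reconstruction of the sizing traced at setup/hugepages.sh@49-@73.
  get_test_nr_hugepages() {
      local size=$1; shift
      local user_nodes=("$@")                       # e.g. (0 1), per hugepages.sh@62
      local default_hugepages=2048                  # Hugepagesize in kB, per the dumps above
      local -g nodes_test=()                        # per-node request map, per hugepages.sh@67
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))  # 1048576 / 2048 = 512 (inferred)
      local _no_nodes
      for _no_nodes in "${user_nodes[@]}"; do
          nodes_test[_no_nodes]=$nr_hugepages       # request 512 pages on node 0 and node 1
      done
  }

512 pages of 2048 kB is exactly 1 GiB per node; the harness then re-runs scripts/setup.sh with NRHUGE=512 HUGENODE=0,1, as the trace below shows.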
00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:27:06.246 03:23:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:09.538 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:09.538 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:09.538 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:09.538 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:09.538 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:09.538 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:09.538 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:09.538 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:09.539 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.539 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169385236 kB' 'MemAvailable: 172400096 kB' 'Buffers: 4132 kB' 'Cached: 15893948 kB' 'SwapCached: 0 kB' 'Active: 13014792 kB' 'Inactive: 3540920 kB' 'Active(anon): 12537820 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660980 kB' 'Mapped: 211924 kB' 'Shmem: 11880188 kB' 'KReclaimable: 282368 kB' 'Slab: 922816 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640448 kB' 'KernelStack: 20960 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14051892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318160 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 
154140672 kB'
[xtrace condensed: the read loop walks /proc/meminfo again, continuing past every key from MemTotal through WritebackTmp while testing against AnonHugePages; the excerpt is truncated mid-iteration at the CommitLimit test]
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
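The AnonHugePages pass above ends with common.sh@33 echoing 0: every other /proc/meminfo key falls through the continue branch at common.sh@32 until the requested key matches. A minimal sketch of that parser, reconstructed from the xtrace alone and not the verbatim setup/common.sh source (the quoted right-hand side of [[ ... == ... ]] forces a literal comparison, which is why xtrace prints it with every character backslash-escaped):

    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching key: next line
            echo "${val%% *}"                  # print the number, drop the "kB" unit
            return 0
        done < /proc/meminfo
        return 1
    }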
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:09.541 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169386288 kB' 'MemAvailable: 172401148 kB' 'Buffers: 4132 kB' 'Cached: 15893952 kB' 'SwapCached: 0 kB' 'Active: 13014436 kB' 'Inactive: 3540920 kB' 'Active(anon): 12537464 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660616 kB' 'Mapped: 211916 kB' 'Shmem: 11880192 kB' 'KReclaimable: 282368 kB' 'Slab: 922812 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640444 kB' 'KernelStack: 20928 kB' 'PageTables: 9036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14051908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318144 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
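Here node= is empty, so common.sh@23 tests the literal (nonexistent) path /sys/devices/system/node/node/meminfo and the query falls back to the global /proc/meminfo. A hedged sketch of the per-node branch implied by the trace, for when a node id is supplied (the sysfs path layout is real; the surrounding code shape is an assumption taken from the xtrace):

    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    # Per-node meminfo lines read "Node 0 MemTotal: ...": the extglob pattern
    # +([0-9]) strips the "Node <n> " prefix so one parser handles both files.
    mem=("${mem[@]#Node +([0-9]) }")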
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
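With anon=0 and surp=0 established, the snapshot's hugepage figures cross-check by hand (illustrative arithmetic only, not part of the suite): 1024 pages of Hugepagesize 2048 kB pin exactly the reported Hugetlb total.

    echo $(( 1024 * 2048 ))            # 2097152 kB, matching 'Hugetlb: 2097152 kB'
    echo $(( 2097152 / 1024 / 1024 ))  # i.e. 2 GiB held in 2 MB huge pages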
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:09.543 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:09.544 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:27:09.544 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:27:09.544 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:27:09.544 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:09.544 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:09.544 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:09.544 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169386860 kB' 'MemAvailable: 172401720 kB' 'Buffers: 4132 kB' 'Cached: 15893972 kB' 'SwapCached: 0 kB' 'Active: 13014528 kB' 'Inactive: 3540920 kB' 'Active(anon): 12537556 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660684 kB' 'Mapped: 211916 kB' 'Shmem: 11880212 kB' 'KReclaimable: 282368 kB' 'Slab: 922812 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640444 kB' 'KernelStack: 20896 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14051932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318144 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
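For a single field, the scan above is equivalent to a one-line query (illustration only; the suite's get_meminfo deliberately re-reads and prefix-strips the whole file so the same code path can serve per-node meminfo too):

    awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo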
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:27:09.546 nr_hugepages=1024
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:27:09.546 resv_hugepages=0
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:27:09.546 surplus_hugepages=0
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:27:09.546 anon_hugepages=0
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... common.sh@17-29 prologue elided: get=HugePages_Total, node unset, mem_f=/proc/meminfo, mapfile -t mem ...]
00:27:09.546 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169387224 kB' 'MemAvailable: 172402084 kB' 'Buffers: 4132 kB' 'Cached: 15894012 kB' 'SwapCached: 0 kB' 'Active: 13014148 kB' 'Inactive: 3540920 kB' 'Active(anon): 12537176 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660240 kB' 'Mapped: 211916 kB' 'Shmem: 11880252 kB' 'KReclaimable: 282368 kB' 'Slab: 922812 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640444 kB' 'KernelStack: 20912 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14051956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
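The assertions at hugepages.sh@107-110 above check the kernel's hugepage accounting identity: HugePages_Total must equal the requested count plus surplus plus reserved pages. A hedged standalone sketch of that check, with awk standing in for get_meminfo and the expected values taken from this run:

# Accounting identity asserted at hugepages.sh@107-110, checked standalone.
nr_hugepages=1024 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: HugePages_Total=$total, expected $((nr_hugepages + surp + resv))" >&2
fi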
[... per-key "[[ <key> == HugePages_Total ]] / continue" probes over every key in the dump above elided; only the last key matches ...]
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
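The get_nodes loop entered here (one iteration per NUMA node; the second iteration follows) enumerates /sys/devices/system/node/node* with an extglob pattern and keys an array by node id. A standalone sketch under the same assumptions (extglob enabled, as the SPDK scripts do; 512 is this test's per-node target):

# NUMA node enumeration in the style of the traced get_nodes.
shopt -s extglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512    # ${node##*node} leaves just the node id
done
echo "no_nodes=${#nodes_sys[@]}"     # 2 on this dual-socket host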
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[... common.sh@17-29 prologue elided: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, "Node 0 " prefixes stripped ...]
00:27:09.548 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 83161876 kB' 'MemUsed: 14453752 kB' 'SwapCached: 0 kB' 'Active: 8821100 kB' 'Inactive: 3343336 kB' 'Active(anon): 8602968 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11762620 kB' 'Mapped: 143060 kB' 'AnonPages: 404904 kB' 'Shmem: 8201152 kB' 'KernelStack: 13048 kB' 'PageTables: 6040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 187364 kB' 'Slab: 536172 kB' 'SReclaimable: 187364 kB' 'SUnreclaim: 348808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
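For the per-node lookup just traced, common.sh switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo; the per-node file prefixes every line with "Node <id> ", which common.sh@29 strips with an extglob pattern before the same key/value parse. A minimal sketch of that step, using the exact expansion seen in the trace:

# Per-node meminfo read with the "Node <id> " prefix stripped (common.sh@28-29).
shopt -s extglob
node=0
mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
mem=("${mem[@]#Node +([0-9]) }")
for line in "${mem[@]}"; do
    [[ $line == HugePages_* ]] && echo "$line"    # the hugepage counters only
done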
[... per-key "[[ <key> == HugePages_Surp ]] / continue" probes over the node0 keys elided; only the last key matches ...]
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[... common.sh@17-29 prologue elided: get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo, mapfile -t mem, "Node 1 " prefixes stripped ...]
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 86225096 kB' 'MemUsed: 7540444 kB' 'SwapCached: 0 kB' 'Active: 4193776 kB' 'Inactive: 197584 kB' 'Active(anon): 3934936 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4135548 kB' 'Mapped: 68856 kB' 'AnonPages: 256036 kB' 'Shmem: 3679124 kB' 'KernelStack: 7896 kB' 'PageTables: 3052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95004 kB' 'Slab: 386640 kB' 'SReclaimable: 95004 kB' 'SUnreclaim: 291636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.550 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:27:09.551 03:23:50 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: get_meminfo's field-matching loop repeats the same IFS=': ' / read -r var val _ / continue steps for every node-meminfo field from Shmem through HugePages_Free before reaching the requested key]
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:27:09.551 node0=512 expecting 512
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:27:09.551 node1=512 expecting 512
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:27:09.551
00:27:09.551 real 0m3.301s
00:27:09.551 user 0m1.296s
00:27:09.551 sys 0m2.077s
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:27:09.551 03:23:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:27:09.551 ************************************
00:27:09.551 END TEST per_node_1G_alloc
00:27:09.551 ************************************
00:27:09.551 03:23:50 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:27:09.551 03:23:50 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
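The run_test line above hands control to even_2G_alloc, which requests 2 GiB worth of hugepages spread evenly across both NUMA nodes. The sizing arithmetic shows up step by step in the trace that follows (hugepages.sh@49-84); as orientation, here is a minimal sketch of it, with names taken from the xtrace and kB units assumed throughout (a reading the traced numbers are consistent with: 2097152 / 2048 = 1024 pages, split 512/512). The real setup/hugepages.sh also handles user-supplied node lists, which this sketch omits.

#!/usr/bin/env bash
# Minimal sketch of the sizing logic traced below
# (get_test_nr_hugepages / get_test_nr_hugepages_per_node, hugepages.sh@49-84).
# Values mirror this run; units are assumed to be kB.

default_hugepages=2048   # kB per page, from 'Hugepagesize: 2048 kB'
size=2097152             # requested kB (2 GiB), the argument traced below
no_nodes=2               # NUMA nodes on this machine (_no_nodes=2 below)

(( size >= default_hugepages )) || exit 1
nr_hugepages=$((size / default_hugepages))   # 2097152 / 2048 = 1024 pages

# Even split, filled from the highest node down, matching the repeated
# 'nodes_test[_no_nodes - 1]=512' assignments in the trace.
declare -a nodes_test
per_node=$((nr_hugepages / no_nodes))        # 512 pages per node
for ((n = no_nodes - 1; n >= 0; n--)); do
    nodes_test[n]=$per_node
done

for n in "${!nodes_test[@]}"; do
    echo "node$n=${nodes_test[n]} expecting $per_node"
done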
setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:09.551 03:23:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:27:09.810 ************************************ 00:27:09.810 START TEST even_2G_alloc 00:27:09.810 ************************************ 00:27:09.810 03:23:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:27:09.810 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:27:09.810 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:27:09.810 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:27:09.810 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:27:09.811 03:23:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:12.343 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:12.343 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:27:12.343 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:12.343 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169376304 kB' 'MemAvailable: 172391164 kB' 'Buffers: 4132 kB' 'Cached: 15894108 kB' 'SwapCached: 0 kB' 'Active: 13015656 kB' 'Inactive: 3540920 kB' 'Active(anon): 12538684 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661540 kB' 'Mapped: 212308 kB' 'Shmem: 11880348 kB' 'KReclaimable: 282368 kB' 'Slab: 922876 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640508 kB' 'KernelStack: 20928 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14052440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318144 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.343 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:12.344 03:23:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue
[xtrace condensed: the same continue pattern repeats for every /proc/meminfo field from Inactive through HardwareCorrupted until the requested key is reached]
00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
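Every run of continue condensed above is produced by one helper: get_meminfo in setup/common.sh snapshots /proc/meminfo (or a per-node meminfo file, when a node is given) and scans it field by field until the requested key turns up. A condensed reconstruction from the common.sh@17-33 steps in this trace follows; treat it as a sketch rather than the canonical upstream source, since option handling and error paths are trimmed.

#!/usr/bin/env bash
# Condensed reconstruction of get_meminfo from setup/common.sh, following the
# common.sh@17-33 steps in the xtrace above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # @23: with a node argument, prefer that node's own meminfo file. With no
    # argument the probe becomes .../node/node/meminfo and fails, as above.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # @28-29: snapshot the file, stripping the "Node <N> " prefix that the
    # per-node files carry.
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # @31-33: walk the snapshot field by field until the requested key turns
    # up, then print its value. Every non-matching field costs one 'continue',
    # which is what the long runs collapsed above consist of.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

get_meminfo HugePages_Total   # prints 1024 on this box, per the snapshot above

The linear scan is why a single get_meminfo call emits dozens of xtrace lines here: under set -x, every read, comparison, and continue is logged, once per meminfo field.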
00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169379064 kB' 'MemAvailable: 172393924 kB' 'Buffers: 4132 kB' 'Cached: 15894112 kB' 'SwapCached: 0 kB' 'Active: 13015248 kB' 'Inactive: 3540920 kB' 'Active(anon): 12538276 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661128 kB' 'Mapped: 211928 kB' 'Shmem: 11880352 kB' 'KReclaimable: 282368 kB' 'Slab: 922824 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640456 kB' 'KernelStack: 20912 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14052456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.345 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[xtrace condensed: the field-matching loop walks every remaining field (Buffers, Cached, ..., HugePages_Rsvd), hitting continue on each, before reaching the requested key]
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e
/sys/devices/system/node/node/meminfo ]] 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169379340 kB' 'MemAvailable: 172394200 kB' 'Buffers: 4132 kB' 'Cached: 15894132 kB' 'SwapCached: 0 kB' 'Active: 13015276 kB' 'Inactive: 3540920 kB' 'Active(anon): 12538304 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661128 kB' 'Mapped: 211928 kB' 'Shmem: 11880372 kB' 'KReclaimable: 282368 kB' 'Slab: 922824 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640456 kB' 'KernelStack: 20912 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14052480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.347 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.609 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_
[xtrace condensed: get_meminfo's field-matching loop for HugePages_Rsvd walks the snapshot again (Cached, SwapCached, Active, ...); the captured log breaks off mid-scan just after the SecPageTables check]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:27:12.610 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:27:12.611 nr_hugepages=1024 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:27:12.611 resv_hugepages=0 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:27:12.611 surplus_hugepages=0 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:27:12.611 anon_hugepages=0 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
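[editorial note] The trace above is setup/common.sh's get_meminfo helper walking every key of /proc/meminfo until it reaches the requested one (HugePages_Rsvd here, yielding resv=0); the \H\u\g\e\P\a\g\e\s\_\R\s\v\d strings are simply how bash xtrace prints the literal comparison pattern, and the runs of "continue" are the non-matching keys being skipped. Below is a minimal sketch of the same idiom, reconstructed from the trace rather than from the actual setup/common.sh source; the helper name get_meminfo_sketch is hypothetical, and the exact control flow of the real script may differ.

#!/usr/bin/env bash
# extglob must be enabled for the +([0-9]) pattern used below.
shopt -s extglob

# Sketch (an assumption, not the shipped setup/common.sh code) of the
# meminfo-scan idiom visible in the trace: pick the global or per-node
# meminfo file, strip any "Node <n> " prefix, then scan key/value pairs.
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem
    # Per-node files live under /sys/devices/system/node/node<N>/meminfo.
    # With no node argument the path degenerates to .../node/node/meminfo,
    # the -e test fails, and the global /proc/meminfo is used -- exactly
    # the "[[ -e /sys/devices/system/node/node/meminfo ]]" step in the log.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Node meminfo files prefix every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value [kB]" lines; print the value of the requested key.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Rsvd      # system-wide reserved huge pages (0 above)
get_meminfo_sketch HugePages_Surp 0    # per-node variant; needs a NUMA node0

The same scan is immediately re-run for HugePages_Total (returning 1024), and further down it is repeated per NUMA node against /sys/devices/system/node/node0/meminfo and node1/meminfo, each reporting HugePages_Total: 512. That per-node pass is how the even_2G_alloc test confirms the even 512/512 split behind hugepages.sh's `(( 1024 == nr_hugepages + surp + resv ))` check echoed just above.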
00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169379856 kB' 'MemAvailable: 172394716 kB' 'Buffers: 4132 kB' 'Cached: 15894152 kB' 'SwapCached: 0 kB' 'Active: 13015288 kB' 'Inactive: 3540920 kB' 'Active(anon): 12538316 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661128 kB' 'Mapped: 211928 kB' 'Shmem: 11880392 kB' 'KReclaimable: 282368 kB' 'Slab: 922824 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640456 kB' 'KernelStack: 20912 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14052500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 
03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.611 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.612 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 
03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 83149728 kB' 'MemUsed: 14465900 kB' 'SwapCached: 0 kB' 'Active: 8821212 kB' 'Inactive: 3343336 kB' 'Active(anon): 8603080 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11762640 kB' 'Mapped: 143060 kB' 'AnonPages: 405032 kB' 'Shmem: 8201172 kB' 'KernelStack: 13032 kB' 
'PageTables: 6084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 187364 kB' 'Slab: 536304 kB' 'SReclaimable: 187364 kB' 'SUnreclaim: 348940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.613 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:12.614 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:12.615 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 86230128 kB' 'MemUsed: 7535412 kB' 'SwapCached: 0 kB' 'Active: 4193744 kB' 'Inactive: 197584 kB' 'Active(anon): 3934904 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4135668 kB' 'Mapped: 68868 kB' 'AnonPages: 255668 kB' 'Shmem: 3679244 kB' 'KernelStack: 7864 kB' 'PageTables: 
00:27:12.615 03:23:53 [xtrace condensed: setup/common.sh@31-32 compares every node1 meminfo key (MemTotal through HugePages_Free) against HugePages_Surp; every non-matching key hits "continue"] 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:27:12.616 node0=512 expecting 512 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:27:12.616 node1=512 expecting 512 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:27:12.616 00:27:12.616 real 0m2.909s 00:27:12.616 user 0m1.113s 00:27:12.616 sys 0m1.830s 03:23:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:12.616 03:23:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:27:12.616 ************************************ 00:27:12.616 END TEST even_2G_alloc 00:27:12.616 ************************************ 00:27:12.616 03:23:53 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:27:12.616 03:23:53 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:12.616 03:23:53 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:12.616 03:23:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:27:12.616 ************************************ 00:27:12.616 START TEST odd_alloc ************************************ 00:27:12.616
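[Note: the "node0=512 expecting 512" / "node1=512 expecting 512" lines are hugepages.sh@126-130 confirming the even split. The mechanism, as far as the trace shows it: each node's final count is used as a key of an associative array, so duplicate counts collapse and a single surviving key equal to the expected value proves every node matched. A sketch under that assumption; sorted_s/nodes_sys track the kernel-reported counts the same way and are omitted here:]

  declare -A sorted_t=()
  nodes_test=([0]=512 [1]=512)             # final per-node counts from the run above
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1      # duplicate counts collapse into one key
      echo "node$node=${nodes_test[node]} expecting 512"
  done
  [[ ${!sorted_t[*]} == 512 ]] && echo OK  # exactly one key, 512 -> even allocation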
03:23:53 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:27:12.616 03:23:53 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.907
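[Note: 2098176 kB is 2049 MiB, i.e. 1024.5 two-MiB pages, which get_test_nr_hugepages rounds up to the odd count nr_hugepages=1025 (hence HUGEMEM=2049 and, in the snapshots below, Hugetlb: 2099200 kB = 1025 x 2048 kB). get_test_nr_hugepages_per_node then divides from the highest node down, which is why the trace assigns node1=512 before node0 takes the odd 513. A simplified sketch of that division loop; variable names mirror the trace:]

  _nr_hugepages=1025
  _no_nodes=2
  nodes_test=()
  while (( _no_nodes > 0 )); do
      # floor-divide what is left across the nodes that are left,
      # filling the highest-numbered node first
      nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
      : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))  # traced as ": 513", ": 0"
      : $(( _no_nodes-- ))                                 # traced as ": 1",   ": 0"
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"     # node0=513 node1=512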
0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:15.907 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:15.907 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.907 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169386388 kB' 'MemAvailable: 172401248 kB' 'Buffers: 4132 kB' 'Cached: 15894248 kB' 'SwapCached: 0 kB' 'Active: 13016028 kB' 'Inactive: 3540920 kB' 'Active(anon): 12539056 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661296 kB' 'Mapped: 212024 kB' 'Shmem: 11880488 kB' 'KReclaimable: 282368 kB' 'Slab: 921880 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 639512 kB' 'KernelStack: 20944 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 14053128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318160 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' [xtrace condensed: setup/common.sh@31-32 compares each /proc/meminfo key (MemTotal through HardwareCorrupted) against AnonHugePages; every non-matching key hits "continue"] 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.909
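[Note: verify_nr_hugepages is gathering the global counters here. With no node argument the sysfs test at common.sh@23 probes the non-existent path ".../node/node/meminfo" and fails, so get_meminfo falls back to /proc/meminfo; the @96 test only bothers reading AnonHugePages when transparent_hugepage is not pinned to "never". A sketch of this accounting step, reusing the get_meminfo sketch further up; the final echo is illustrative, not part of the suite:]

  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      # THP enabled in some mode: anonymous huge pages count toward the total
      anon=$(get_meminfo AnonHugePages)
  fi
  surp=$(get_meminfo HugePages_Surp)   # surplus pages allocated beyond nr_hugepages
  resv=$(get_meminfo HugePages_Rsvd)   # pages reserved but not yet faulted in
  echo "anon=${anon} kB surp=${surp} resv=${resv}"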
03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169386392 kB' 'MemAvailable: 172401252 kB' 'Buffers: 4132 kB' 'Cached: 15894248 kB' 'SwapCached: 0 kB' 'Active: 13017572 kB' 'Inactive: 3540920 kB' 'Active(anon): 12540600 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662852 kB' 'Mapped: 212520 kB' 'Shmem: 11880488 kB' 'KReclaimable: 282368 kB' 'Slab: 921880 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 639512 kB' 'KernelStack: 20896 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 14055292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318112 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.909 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.910 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.911 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.911 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... xtrace condensed: the HugePages_Surp lookup walks the remaining /proc/meminfo keys in order, Slab through HugePages_Free; none matches, so every iteration is just IFS=': '; read -r var val _; continue ...] 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169382788 kB' 'MemAvailable: 172397648 kB' 'Buffers: 4132 kB' 'Cached: 15894248 kB' 'SwapCached: 0 kB' 'Active: 13020700 kB' 'Inactive: 3540920 kB' 'Active(anon): 12543728 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667004 kB' 'Mapped: 212444 kB' 'Shmem: 11880488 kB' 'KReclaimable: 282368 kB' 'Slab: 921856 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 639488 kB' 'KernelStack: 20912 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 14059284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318112 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:15.912 03:23:56 setup.sh.hugepages.odd_alloc -- 
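The trace above is the whole of get_meminfo at work: pick /proc/meminfo (or the per-node meminfo file when a node number is given), strip any "Node N " prefix, then scan key by key until the requested field matches and echo its value (0 for HugePages_Surp here). A minimal runnable sketch of that loop, reconstructed from the trace; the exact function body is an assumption, only the behavior shown in the log is taken as given:

    #!/usr/bin/env bash
    # Sketch of setup/common.sh's get_meminfo as reconstructed from the
    # xtrace above; the shape of the body is assumed.
    shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

    get_meminfo() {
        local get=$1 node=${2-}
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, read the per-NUMA-node view instead; with no
        # argument the path below is /sys/devices/system/node/node/meminfo,
        # which does not exist, so /proc/meminfo is kept, as in the trace.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # the long scan seen above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp    # prints 0 on the system in this log

The backslash-riddled \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the trace is just how xtrace renders the quoted right-hand side of [[ $var == "$get" ]]: each character escaped so it matches literally rather than as a glob pattern.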
setup/common.sh@32 -- # continue [... xtrace condensed: the HugePages_Rsvd lookup walks the same /proc/meminfo keys in order, MemFree through FilePmdMapped; none matches, so every iteration is just IFS=': '; read -r var val _; continue ...] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:27:15.914 nr_hugepages=1025 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:27:15.914 resv_hugepages=0 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:27:15.914 surplus_hugepages=0 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:27:15.914 anon_hugepages=0 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- 
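At this point the test has collected surp=0 and resv=0 and echoed nr_hugepages=1025, and hugepages.sh@107 through @110 re-read HugePages_Total and assert that the books balance. The same check as a standalone sketch; the grep-based read is an assumption, the arithmetic is exactly what the trace asserts:

    #!/usr/bin/env bash
    # Values echoed in the trace above.
    nr_hugepages=1025 resv=0 surp=0
    # Re-read the global total straight from /proc/meminfo.
    read -r _ total < <(grep '^HugePages_Total:' /proc/meminfo)
    # hugepages.sh@107/@110: the kernel's total must equal
    # requested + reserved + surplus pages.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi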
setup/common.sh@20 -- # local mem_f mem 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169389716 kB' 'MemAvailable: 172404576 kB' 'Buffers: 4132 kB' 'Cached: 15894292 kB' 'SwapCached: 0 kB' 'Active: 13016160 kB' 'Inactive: 3540920 kB' 'Active(anon): 12539188 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661904 kB' 'Mapped: 212292 kB' 'Shmem: 11880532 kB' 'KReclaimable: 282368 kB' 'Slab: 921832 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 639464 kB' 'KernelStack: 20960 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029588 kB' 'Committed_AS: 14052816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318112 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:15.914 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:27:15.914 03:23:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' [... xtrace condensed: the HugePages_Total lookup walks the /proc/meminfo keys Cached through Unaccepted; none matches, so every iteration is just IFS=': '; read -r var val _; continue ...] 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.916 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 83163604 kB' 'MemUsed: 14452024 kB' 'SwapCached: 0 kB' 'Active: 8823428 kB' 'Inactive: 3343336 kB' 'Active(anon): 8605296 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11762780 kB' 'Mapped: 143064 kB' 'AnonPages: 407104 kB' 'Shmem: 8201312 kB' 'KernelStack: 13032 kB' 'PageTables: 6088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 187364 kB' 'Slab: 535468 kB' 'SReclaimable: 187364 kB' 'SUnreclaim: 348104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [... xtrace condensed: the node0 HugePages_Surp lookup walks the node0 meminfo keys MemTotal through HugePages_Free; none matches, so every iteration is just IFS=': '; read -r var val _; continue ...] 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:15.918 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:15.919 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 86226792 kB' 'MemUsed: 7538748 kB' 'SwapCached: 0 kB' 'Active: 4192220 kB' 'Inactive: 197584 kB' 'Active(anon): 3933380 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4135664 kB' 'Mapped: 68876 kB' 'AnonPages: 254184 kB' 'Shmem: 3679240 kB' 'KernelStack: 7848 kB' 'PageTables: 2872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95004 kB' 'Slab: 386364 kB' 'SReclaimable: 95004 kB' 'SUnreclaim: 291360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:27:15.919 03:23:56 setup.sh.hugepages.odd_alloc
[xtrace trimmed: setup/common.sh@32 compares each node1 meminfo key, MemTotal through HugePages_Free, against HugePages_Surp and skips it with continue]
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc --
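The wall of IFS/read/continue lines trimmed above is the body of setup/common.sh's get_meminfo helper scanning node 1's meminfo for HugePages_Surp. Reassembled from the xtrace at common.sh@17-33, it behaves roughly like the sketch below; the names and paths are taken from the trace, but the real script may differ in details:

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem line

    # Default to the machine-wide view; prefer the per-NUMA-node file when a
    # node id was passed and the kernel exposes one (common.sh@22-24).
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip that so the
    # rest parses like /proc/meminfo (the +([0-9]) pattern needs extglob).
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # This comparison is what produces the long escaped-pattern runs in
        # the trace: every key that is not the requested one is skipped.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

So get_meminfo HugePages_Surp 1 reads /sys/devices/system/node/node1/meminfo, skips every key until HugePages_Surp, and echoes 0, which is the value the (( nodes_test[node] += 0 )) lines then fold into the per-node totals.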
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:27:15.920 node0=512 expecting 513 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:27:15.920 node1=513 expecting 512 00:27:15.920 03:23:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:27:15.920 00:27:15.920 real 0m3.036s 00:27:15.920 user 0m1.135s 00:27:15.920 sys 0m1.947s 00:27:15.921 03:23:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:15.921 03:23:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:27:15.921 ************************************ 00:27:15.921 END TEST odd_alloc 00:27:15.921 ************************************ 00:27:15.921 03:23:56 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:27:15.921 03:23:56 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:15.921 03:23:56 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:15.921 03:23:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:27:15.921 ************************************ 00:27:15.921 START TEST custom_alloc 00:27:15.921 ************************************ 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 
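custom_alloc starts by turning the requested sizes into page counts. From the trace at setup/hugepages.sh@49-65 together with the 'Hugepagesize: 2048 kB' lines in the meminfo dumps, 1048576 kB maps to nr_hugepages=512 and 2097152 kB to 1024, consistent with a plain division by the default hugepage size. The sketch below encodes that inference and is not a verbatim copy of hugepages.sh:

declare -a nodes_test=()
default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' in this run

get_test_nr_hugepages() {
    local size=$1                        # requested total, in kB
    (( size >= default_hugepages ))      # sanity check visible at hugepages.sh@55
    nr_hugepages=$(( size / default_hugepages ))   # 1048576 -> 512, 2097152 -> 1024
}

# With no explicit per-node request, the total is split evenly across the two
# NUMA nodes, matching the two nodes_test[...]=256 assignments for 512 pages.
get_test_nr_hugepages_per_node() {
    local _no_nodes=2 i
    for (( i = 0; i < _no_nodes; i++ )); do
        nodes_test[i]=$(( nr_hugepages / _no_nodes ))
    done
}

Once nodes_hp is populated, as it is a few lines further down, the same helper copies nodes_hp into nodes_test instead of splitting evenly, which is how the test ends up planning 512 pages on node 0 and 1024 on node 1.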
00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for 
node in "${!nodes_hp[@]}" 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:27:15.921 03:23:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:18.454 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:18.454 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:27:18.454 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:18.454 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.454 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.455 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 168343884 kB' 'MemAvailable: 171358744 kB' 'Buffers: 4132 kB' 'Cached: 15894408 kB' 'SwapCached: 0 kB' 'Active: 13017684 kB' 'Inactive: 3540920 kB' 'Active(anon): 12540712 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663192 kB' 'Mapped: 211924 kB' 'Shmem: 11880648 kB' 'KReclaimable: 282368 kB' 'Slab: 922484 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640116 kB' 'KernelStack: 21408 kB' 'PageTables: 10308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 14056296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318336 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:18.455 03:23:59 setup.sh.hugepages.custom_alloc
[xtrace trimmed: setup/common.sh@32 compares each meminfo key, MemTotal through HardwareCorrupted, against AnonHugePages and skips it with continue]
-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:18.720 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:27:18.720 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
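The get_meminfo call being traced here is part of verify_nr_hugepages collecting the numbers it will check against the 1536 pages just requested. Pieced together from setup/hugepages.sh@96-99, the surrounding logic is approximately the following sketch; the sysfs path is the standard THP control file, and get_meminfo is the helper sketched earlier:

# 'always [madvise] never' in the trace is this file's content; anonymous
# hugepages are only counted when THP is not pinned to 'never'.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
fi

# Called without a node id, get_meminfo falls back to /proc/meminfo, which is
# why the trace shows node= empty and mem_f=/proc/meminfo, and it returns the
# machine-wide HugePages_Surp count.
surp=$(get_meminfo HugePages_Surp)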
00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 168348072 kB' 'MemAvailable: 171362932 kB' 'Buffers: 4132 kB' 'Cached: 15894412 kB' 'SwapCached: 0 kB' 'Active: 13016512 kB' 'Inactive: 3540920 kB' 'Active(anon): 12539540 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661960 kB' 'Mapped: 211900 kB' 'Shmem: 11880652 kB' 'KReclaimable: 282368 kB' 'Slab: 922488 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640120 kB' 'KernelStack: 20976 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 14056312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318304 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:18.721 03:23:59 setup.sh.hugepages.custom_alloc
[xtrace trimmed: setup/common.sh@32 compares each meminfo key from MemTotal onward against HugePages_Surp and skips it with continue; the scan is still in progress at Mapped here]
00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.722 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 168348056 kB' 'MemAvailable: 171362916 kB' 'Buffers: 4132 kB' 'Cached: 15894428 kB' 'SwapCached: 0 kB' 'Active: 13017112 kB' 'Inactive: 3540920 kB' 'Active(anon): 12540140 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662592 kB' 'Mapped: 211900 kB' 'Shmem: 
11880668 kB' 'KReclaimable: 282368 kB' 'Slab: 922584 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640216 kB' 'KernelStack: 21088 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 14056332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318256 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 
03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.723 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.724 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:27:18.725 nr_hugepages=1536 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:27:18.725 resv_hugepages=0 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:27:18.725 surplus_hugepages=0 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:27:18.725 anon_hugepages=0 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:27:18.725 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 168346596 kB' 'MemAvailable: 171361456 kB' 'Buffers: 4132 kB' 'Cached: 15894452 kB' 'SwapCached: 0 kB' 'Active: 13017496 kB' 'Inactive: 3540920 kB' 'Active(anon): 12540524 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662936 kB' 'Mapped: 211900 kB' 'Shmem: 11880692 kB' 'KReclaimable: 282368 kB' 'Slab: 922584 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640216 kB' 'KernelStack: 21168 kB' 'PageTables: 9820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506324 kB' 'Committed_AS: 14056356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318288 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:18.726 03:23:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:27:18.726 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:18.726 03:23:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:18.726 [xtrace condensed: the get_meminfo read loop at setup/common.sh@31-32 checks each remaining /proc/meminfo field (Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) against HugePages_Total; every non-matching field hits 'continue']
00:27:18.727 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:27:18.727 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:27:18.727 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:27:18.727 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:27:18.727 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
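For readers following the trace: get_nodes fills nodes_sys with one entry per NUMA node, and the values it records here (512 for node0, 1024 for node1) are each node's current 2 MiB hugepage count. A minimal sketch of the idea in bash, assuming the per-node counts come from the standard sysfs hugepage counters (the exact source inside setup/hugepages.sh is not visible in this trace):

    #!/usr/bin/env bash
    # Minimal sketch, not the SPDK implementation: enumerate NUMA nodes and
    # record each node's current 2 MiB hugepage count, as get_nodes does above.
    shopt -s extglob                 # enables the +([0-9]) glob used in the trace

    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the per-node count (512/1024 in this run):
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes visible" >&2; exit 1; }
    for i in "${!nodes_sys[@]}"; do
        echo "node$i: ${nodes_sys[i]} hugepages"
    done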
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:18.728 03:23:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 83161716 kB' 'MemUsed: 14453912 kB' 'SwapCached: 0 kB' 'Active: 8823020 kB' 'Inactive: 3343336 kB' 'Active(anon): 8604888 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11762828 kB' 'Mapped: 143004 kB' 'AnonPages: 406600 kB' 'Shmem: 8201360 kB' 'KernelStack: 13400 kB' 'PageTables: 7108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 187364 kB' 'Slab: 535916 kB' 'SReclaimable: 187364 kB' 'SUnreclaim: 348552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:27:18.728 [xtrace condensed: the read loop at setup/common.sh@31-32 checks each node0 meminfo field above against HugePages_Surp; every field up to HugePages_Free hits 'continue']
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
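The get_meminfo call traced above is the same helper that answered the HugePages_Total lookup earlier: it picks /proc/meminfo, or the per-node sysfs meminfo when a node argument is given, strips the "Node N " prefix, and walks key/value pairs until the requested field matches. A hedged reconstruction from the xtrace (names follow the trace; the loop shape around the printf is inferred):

    #!/usr/bin/env bash
    # Reconstruction inferred from the xtrace of setup/common.sh; treat it as
    # a sketch of the observed behavior, not the exact SPDK source.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node view when it exists (common.sh@23-24 in the trace).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # The long 'continue' cascade in the log is this loop skipping every
        # field whose name is not $get.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp 0   # printed 0 for node0 in this run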
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:18.729 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765540 kB' 'MemFree: 85182932 kB' 'MemUsed: 8582608 kB' 'SwapCached: 0 kB' 'Active: 4194560 kB' 'Inactive: 197584 kB' 'Active(anon): 3935720 kB' 'Inactive(anon): 0 kB' 'Active(file): 258840 kB' 'Inactive(file): 197584 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4135796 kB' 'Mapped: 68896 kB' 'AnonPages: 256548 kB' 'Shmem: 3679372 kB' 'KernelStack: 7896 kB' 'PageTables: 2984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95004 kB' 'Slab: 386668 kB' 'SReclaimable: 95004 kB' 'SUnreclaim: 291664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:27:18.729 [xtrace condensed: the read loop at setup/common.sh@31-32 checks each node1 meminfo field above against HugePages_Surp; every field up to HugePages_Free hits 'continue']
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
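Both nodes report HugePages_Surp of 0, so the accounting loop traced above leaves nodes_test at its expected 512/1024 split. A self-contained sketch of that step, with nodes_test and resv values taken from this run; surp_for_node is a hypothetical stand-in for the get_meminfo call:

    #!/usr/bin/env bash
    # Sketch of the loop at hugepages.sh@115-117: fold reserved pages and each
    # node's surplus hugepages into the per-node expected totals.
    declare -a nodes_test=([0]=512 [1]=1024)   # expected split in this run
    resv=0                                     # HugePages_Rsvd was 0 here

    surp_for_node() {   # hypothetical stand-in for 'get_meminfo HugePages_Surp <node>'
        awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$1/meminfo"
    }

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                       # spread reservations
        (( nodes_test[node] += $(surp_for_node "$node") ))   # add surplus (0 here)
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node: expecting ${nodes_test[node]} hugepages"
    done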
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:27:18.731 node0=512 expecting 512
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:27:18.731 node1=1024 expecting 1024
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:27:18.731 
00:27:18.731 real 0m3.035s
00:27:18.731 user 0m1.223s
00:27:18.731 sys 0m1.847s
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:27:18.731 03:24:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:27:18.731 ************************************
00:27:18.731 END TEST custom_alloc
00:27:18.731 ************************************
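custom_alloc passes because each node's actual count matches its expectation: the echo lines above print the pairs, and the final hugepages.sh@130 test compares the comma-joined lists (512,1024 on both sides). A minimal sketch of that comparison, assuming the comma-join shape seen in the trace:

    #!/usr/bin/env bash
    # Sketch of the custom_alloc verdict: report per-node actual vs expected,
    # then compare the joined lists the way hugepages.sh@130 does above.
    declare -a nodes_test=([0]=512 [1]=1024)   # expected (test parameters)
    declare -a nodes_sys=([0]=512 [1]=1024)    # actual (read from sysfs this run)

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    IFS=,   # joins array elements with commas in the ${arr[*]} expansions below
    [[ ${nodes_sys[*]} == "${nodes_test[*]}" ]] && echo 'custom_alloc layout verified'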
00:27:18.731 03:24:00 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:27:18.731 03:24:00 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:27:18.731 03:24:00 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:27:18.731 03:24:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:27:18.731 ************************************
00:27:18.731 START TEST no_shrink_alloc
00:27:18.731 ************************************
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:27:18.731 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:27:18.990 03:24:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:27:22.313 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:27:22.313 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:27:22.313 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:27:22.313 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
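The no_shrink_alloc setup above turns a size argument into a page count: with the default 2048 kB hugepage size, get_test_nr_hugepages 2097152 0 yields nr_hugepages=1024, all of it pinned to node 0. A sketch of that arithmetic (a minimal reading of the trace, not the full SPDK helper; the size unit of kB is an assumption consistent with the numbers):

    #!/usr/bin/env bash
    # Sketch of the size-to-pages computation traced at hugepages.sh@49-71.
    default_hugepages=2048    # kB per page; matches 'Hugepagesize: 2048 kB' below
    size=2097152              # first argument, treated as kB here (2 GiB)
    node_ids=(0)              # remaining arguments: nodes to pin the pages on

    (( size >= default_hugepages )) || { echo "size below one hugepage" >&2; exit 1; }
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

    declare -a nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages             # node0 expects all 1024 pages
    done
    echo "nr_hugepages=$nr_hugepages on node(s): ${node_ids[*]}"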
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:22.314 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169306704 kB' 'MemAvailable: 172321564 kB' 'Buffers: 4132 kB' 'Cached: 15894564 kB' 'SwapCached: 0 kB' 'Active: 13023264 kB' 'Inactive: 3540920 kB' 'Active(anon): 12546292 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668336 kB' 'Mapped: 212968 kB' 'Shmem: 11880804 kB' 'KReclaimable: 282368 kB' 'Slab: 922816 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640448 kB' 'KernelStack: 20976 kB' 'PageTables: 9412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14063676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318244 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
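Two anonymous-hugepage probes frame this stretch: the hugepages.sh@96 test confirms transparent hugepages are not set to [never] (the string "always [madvise] never" is the sysfs THP mode line), and the get_meminfo AnonHugePages scan that follows reads how many anonymous hugepages THP has already handed out (0 kB in the snapshot above). A hedged sketch of both probes, using the standard kernel interfaces; the gating logic is inferred from the trace:

    #!/usr/bin/env bash
    # Sketch of the THP-mode check (hugepages.sh@96) and the AnonHugePages
    # lookup it gates; paths are the standard kernel interfaces.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"

    if [[ $thp != *"[never]"* ]]; then
        # THP may be backing anonymous memory with hugepages, so the test
        # accounts for them; the snapshot above shows 0 kB.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "THP mode: $thp; AnonHugePages: ${anon} kB"
    fi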
00:27:22.314 [xtrace condensed: the read loop at setup/common.sh@31-32 checks each /proc/meminfo field of the snapshot above against AnonHugePages; MemTotal through VmallocChunk each hit 'continue' and the scan carries on below]
00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169307216 kB' 'MemAvailable: 172322076 kB' 'Buffers: 4132 kB' 'Cached: 15894568 kB' 'SwapCached: 0 kB' 'Active: 13022332 kB' 'Inactive: 3540920 kB' 'Active(anon): 12545360 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667884 kB' 'Mapped: 212888 kB' 'Shmem: 11880808 kB' 'KReclaimable: 282368 kB' 'Slab: 922780 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640412 kB' 'KernelStack: 20960 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14063696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318180 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.315 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.316 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169307496 kB' 'MemAvailable: 172322356 kB' 'Buffers: 4132 kB' 'Cached: 15894584 kB' 'SwapCached: 0 kB' 'Active: 13022372 kB' 'Inactive: 3540920 kB' 'Active(anon): 12545400 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667888 kB' 'Mapped: 212888 kB' 'Shmem: 11880824 kB' 'KReclaimable: 282368 kB' 'Slab: 922780 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640412 kB' 'KernelStack: 20960 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14063716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318180 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc 
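The setup/common.sh@17-33 records above trace get_meminfo end to end: pick a meminfo source, strip any per-node "Node N " prefix, then split each line on ': ' and compare the key against the requested field. Below is a minimal bash reconstruction of that helper, inferred from the xtrace output rather than copied from setup/common.sh, so the exact source may differ.

shopt -s extglob  # the +([0-9]) pattern below is an extglob pattern

get_meminfo() {
    local get=$1 node=$2
    local var val line
    local mem_f=/proc/meminfo mem
    # With a node argument, read the per-NUMA-node file instead (common.sh@23-25).
    if [[ -n $node ]] && [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # the loop traced at common.sh@31
        [[ $var == "$get" ]] || continue        # common.sh@32: not our key, next line
        echo "$val"                             # common.sh@33: kB amount or bare count
        return 0
    done
    return 1
}

surp=$(get_meminfo HugePages_Surp)  # -> 0 in this run

In this run every queried hugepage counter comes back 0 except HugePages_Total and HugePages_Free, which the snapshots report as 1024.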
00:27:22.317 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # scanned keys MemTotal through HugePages_Free in snapshot order; none matched HugePages_Rsvd
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:27:22.319 nr_hugepages=1024
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:27:22.319 resv_hugepages=0
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:27:22.319 surplus_hugepages=0
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:27:22.319 anon_hugepages=0
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Total node= var val mem_f mem; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:27:22.319 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169307244 kB' 'MemAvailable: 172322104 kB' 'Buffers: 4132 kB' 'Cached: 15894624 kB' 'SwapCached: 0 kB' 'Active: 13022052 kB' 'Inactive: 3540920 kB' 'Active(anon): 12545080 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667496 kB' 'Mapped: 212888 kB' 'Shmem: 11880864 kB' 'KReclaimable: 282368 kB' 'Slab: 922780 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640412 kB' 'KernelStack: 20944 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14063740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318180 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
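The echoed nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages lines and the two (( ... )) tests above are the heart of the no_shrink_alloc check. A hedged sketch of that arithmetic with this run's values follows; the comparison shape is inferred from the hugepages.sh@107-109 trace, where the literal 1024 on the left appears to be the HugePages_Total the script read back earlier.

# values echoed by the test at hugepages.sh@102-105
nr_hugepages=1024
resv=0
surp=0
anon=0

# hugepages.sh@107: the pool the kernel reports must equal the configured
# count plus surplus plus reserved pages; with surp=resv=0 this collapses
# to hugepages.sh@109, a straight equality check.
(( 1024 == nr_hugepages + surp + resv ))  # true in this run
(( 1024 == nr_hugepages ))                # true in this run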
00:27:22.319-00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [trace condensed: the read -r/IFS=': ' loop visited every /proc/meminfo key from MemTotal through Unaccepted and hit continue on each one, since none matched HugePages_Total]
00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 --
# for node in /sys/devices/system/node/node+([0-9]) 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:22.321 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 82076540 kB' 'MemUsed: 15539088 kB' 'SwapCached: 0 kB' 'Active: 8828164 kB' 'Inactive: 3343336 kB' 'Active(anon): 8610032 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11762904 kB' 'Mapped: 143072 kB' 'AnonPages: 411748 kB' 'Shmem: 8201436 kB' 'KernelStack: 13064 kB' 'PageTables: 6244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 187364 kB' 'Slab: 536104 kB' 'SReclaimable: 187364 kB' 'SUnreclaim: 348740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.322 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
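get_nodes (hugepages.sh@27 through @33 above) discovers the NUMA layout by globbing sysfs node directories. A sketch of that enumeration; the trace only shows the already-evaluated right-hand sides of the nodes_sys assignments (1024 for node0, 0 for node1), so reading each node's 2 MB nr_hugepages counter below is an assumption about where those values come from:

    shopt -s extglob nullglob
    declare -a nodes_sys

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Array index = numeric node id (strip everything through "node").
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}    # 2 on this machine
    }

With 1024 pages on node0, 0 surplus and 0 reserved, the per-node accounting that follows expects exactly the "node0=1024 expecting 1024" line echoed further down.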
00:27:22.322-00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [trace condensed: the same read/continue walk over the node0 meminfo keys, MemUsed through HugePages_Total, none of which matched HugePages_Surp]
00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.323 03:24:03
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:27:22.323 node0=1024 expecting 1024 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:27:22.323 03:24:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:24.861 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:24.861 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:24.861 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:24.861 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:27:24.861 03:24:06 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169333400 kB' 'MemAvailable: 172348260 kB' 'Buffers: 4132 kB' 'Cached: 15894700 kB' 'SwapCached: 0 kB' 'Active: 13022672 kB' 'Inactive: 3540920 kB' 'Active(anon): 12545700 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 667972 kB' 'Mapped: 212904 kB' 'Shmem: 11880940 kB' 'KReclaimable: 282368 kB' 'Slab: 923112 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640744 kB' 'KernelStack: 20944 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14064360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318212 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:24.861 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:27:24.861-00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [trace condensed: the AnonHugePages lookup continued past every /proc/meminfo key from MemFree through Committed_AS, none matching] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:24.862 03:24:06 
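The trace above is one complete call to get_meminfo AnonHugePages from setup/common.sh, reduced here to its result anon=0. As a reading aid, a minimal Bash sketch of the loop the xtrace is stepping through, reconstructed from the trace itself; the @NN comments map to the traced line numbers, and the verbatim upstream source may differ in detail:

    shopt -s extglob                                 # required by the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}                     # key to fetch; optional NUMA node (empty in this run)
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                          # common.sh@22: default to the global file
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # common.sh@23: per-node file if present
        fi
        mapfile -t mem < "$mem_f"                    # common.sh@28: snapshot the file into an array
        mem=("${mem[@]#Node +([0-9]) }")             # common.sh@29: strip any "Node N " prefix
        while IFS=': ' read -r var val _; do         # common.sh@31: split "Key: value kB" pairs
            [[ $var == "$get" ]] || continue         # common.sh@32: source of the long runs of 'continue'
            echo "$val"                              # common.sh@33: AnonHugePages is '0 kB' here, so 0
            return 0
        done < <(printf '%s\n' "${mem[@]}")          # common.sh@16: the big single-line snapshot in the trace
    }

Called as anon=$(get_meminfo AnonHugePages), which is how hugepages.sh@97 arrives at anon=0.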
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:24.862 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:24.863 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169333624 kB' 'MemAvailable: 172348484 kB' 'Buffers: 4132 kB' 'Cached: 15894700 kB' 'SwapCached: 0 kB' 'Active: 13023288 kB' 'Inactive: 3540920 kB' 'Active(anon): 12546316 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668652 kB' 'Mapped: 212892 kB' 'Shmem: 11880940 kB' 'KReclaimable: 282368 kB' 'Slab: 923156 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640788 kB' 'KernelStack: 20992 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14066784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318228 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
00:27:24.863 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... identical IFS/read/continue iterations for every key from MemTotal through HugePages_Rsvd: none matches HugePages_Surp ...]
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
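One detail worth noting before the next lookups: node= is empty throughout this excerpt, so the existence test at common.sh@23 expands to the literal path /sys/devices/system/node/node/meminfo, with no node number, which cannot exist; every call therefore falls back to the global /proc/meminfo, and the "Node N " prefix strip at common.sh@29 is a no-op. A quick illustration against the sketch above (the node=0 case is hypothetical for this run and assumes a NUMA machine):

    node=        # empty, exactly as traced: [[ -e /sys/devices/system/node/node/meminfo ]]
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && echo per-node || echo global
    # -> global: the candidate path is missing its node number

    node=0       # what a NUMA-aware caller would pass instead
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && echo per-node || echo global
    # -> per-node, when /sys/devices/system/node/node0/meminfo exists; its lines carry
    #    a "Node 0 " prefix, which is what the common.sh@29 strip removes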
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:27:24.864 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169333220 kB' 'MemAvailable: 172348080 kB' 'Buffers: 4132 kB' 'Cached: 15894724 kB' 'SwapCached: 0 kB' 'Active: 13023448 kB' 'Inactive: 3540920 kB' 'Active(anon): 12546476 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668728 kB' 'Mapped: 212892 kB' 'Shmem: 11880964 kB' 'KReclaimable: 282368 kB' 'Slab: 923164 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640796 kB' 'KernelStack: 20976 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14064404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318196 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB'
00:27:24.865 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [... identical IFS/read/continue iterations for every key from MemTotal through HugePages_Free: none matches HugePages_Rsvd ...]
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:27:24.866 nr_hugepages=1024
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:27:24.866 resv_hugepages=0
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:27:24.866 surplus_hugepages=0
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:27:24.866 anon_hugepages=0
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:27:24.866 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
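The three lookups feed the pool-accounting checks at hugepages.sh@107 and @109: the test configured 1024 hugepages and verifies that the kernel neither shrank the pool nor padded it with surplus or reserved pages. Condensed into plain Bash, using the values traced in this run:

    anon=$(get_meminfo AnonHugePages)    # 0: no anonymous THP usage recorded
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no pages reserved but not yet faulted in
    nr_hugepages=1024                    # pool size this no_shrink_alloc test set up

    (( 1024 == nr_hugepages + surp + resv ))   # hugepages.sh@107: the accounting adds up
    (( 1024 == nr_hugepages ))                 # hugepages.sh@109: the pool itself was not shrunk

Both arithmetic tests succeed, and the script immediately re-reads HugePages_Total from a fresh snapshot to confirm the kernel-side count still agrees.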
IFS=': ' 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381168 kB' 'MemFree: 169333836 kB' 'MemAvailable: 172348696 kB' 'Buffers: 4132 kB' 'Cached: 15894744 kB' 'SwapCached: 0 kB' 'Active: 13023476 kB' 'Inactive: 3540920 kB' 'Active(anon): 12546504 kB' 'Inactive(anon): 0 kB' 'Active(file): 476972 kB' 'Inactive(file): 3540920 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 668728 kB' 'Mapped: 212892 kB' 'Shmem: 11880984 kB' 'KReclaimable: 282368 kB' 'Slab: 923164 kB' 'SReclaimable: 282368 kB' 'SUnreclaim: 640796 kB' 'KernelStack: 20976 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030612 kB' 'Committed_AS: 14064428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 318196 kB' 'VmallocChunk: 0 kB' 'Percpu: 82176 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810260 kB' 'DirectMap2M: 44103680 kB' 'DirectMap1G: 154140672 kB' 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.128 03:24:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31
-- # IFS=': ' 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:27:25.129 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
97615628 kB' 'MemFree: 82098964 kB' 'MemUsed: 15516664 kB' 'SwapCached: 0 kB' 'Active: 8829192 kB' 'Inactive: 3343336 kB' 'Active(anon): 8611060 kB' 'Inactive(anon): 0 kB' 'Active(file): 218132 kB' 'Inactive(file): 3343336 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11763000 kB' 'Mapped: 143068 kB' 'AnonPages: 412740 kB' 'Shmem: 8201532 kB' 'KernelStack: 13128 kB' 'PageTables: 6444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 187364 kB' 'Slab: 536328 kB' 'SReclaimable: 187364 kB' 'SUnreclaim: 348964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.130 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
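The xtrace above captures setup/common.sh's get_meminfo walking a meminfo file field by field: it splits each line on IFS=': ' into var and val, continues past every field that is not the requested one, and echoes the matching value (1024 for HugePages_Total system-wide; the node0 scan concludes just below with echo 0 for HugePages_Surp). A minimal standalone rendering of that loop, reconstructed from the trace, follows; the real helper uses mapfile plus an extglob prefix strip rather than sed, so treat this as an approximation, not the verbatim SPDK source.

    get_meminfo() {
        # Sketch reconstructed from the xtrace above; an approximation of the
        # SPDK helper, not a copy of it.
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> "; drop that before
        # splitting so the field names line up with /proc/meminfo's.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as get_meminfo HugePages_Total or get_meminfo HugePages_Surp 0, mirroring the two invocations traced in this test.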
00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:27:25.131 node0=1024 expecting 1024 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:27:25.131 00:27:25.131 real 0m6.218s 00:27:25.131 user 0m2.431s 00:27:25.131 sys 0m3.809s 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:25.131 03:24:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:27:25.131 ************************************ 00:27:25.131 END TEST no_shrink_alloc 00:27:25.131 ************************************ 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:27:25.131 03:24:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:27:25.131 00:27:25.131 real 0m23.831s 00:27:25.131 user 0m8.809s 00:27:25.131 sys 0m13.920s 00:27:25.131 03:24:06 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:25.131 03:24:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:27:25.131 ************************************ 00:27:25.131 END TEST hugepages 00:27:25.131 ************************************ 00:27:25.131 03:24:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:27:25.131 03:24:06 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:25.131 03:24:06 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:25.131 03:24:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:27:25.131 ************************************ 00:27:25.131 START TEST driver 00:27:25.131 ************************************ 00:27:25.131 03:24:06 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:27:25.131 * Looking for test storage... 
00:27:25.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:27:25.390 03:24:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:27:25.390 03:24:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:27:25.390 03:24:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:29.579 03:24:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:27:29.579 03:24:10 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:29.579 03:24:10 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:29.579 03:24:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:27:29.579 ************************************ 00:27:29.580 START TEST guess_driver 00:27:29.580 ************************************ 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 222 > 0 )) 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:27:29.580 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:27:29.580 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:27:29.580 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:27:29.580 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:27:29.580 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:27:29.580 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:27:29.580 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:27:29.580 03:24:10 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' Looking for driver=vfio-pci 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:27:29.580 03:24:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:27:32.865 03:24:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:27:32.865 03:24:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:27:32.865 03:24:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:27:34.243 03:24:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:27:34.243 03:24:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:27:34.243 03:24:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:27:34.243 03:24:15 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:27:34.243 03:24:15 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:27:34.243 03:24:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:27:34.243 03:24:15 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:39.516 00:27:39.516 real 0m9.069s user 0m2.543s sys 0m4.532s 03:24:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:39.516 03:24:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 ************************************ 00:27:39.516 END TEST guess_driver 00:27:39.516 ************************************ 00:27:39.516 00:27:39.516 real 0m13.508s user 0m3.808s sys 0m6.976s 03:24:19 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:39.516
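The guess_driver probe traced above reduces to three checks: read the vfio unsafe-noiommu parameter if the module exposes it, count the entries under /sys/kernel/iommu_groups (222 on this host), and accept vfio-pci when modprobe --show-depends vfio_pci resolves to real .ko modules (the insmod list in the trace); otherwise the test would report 'No valid driver found'. A condensed sketch of that decision follows; the function name is illustrative and the real setup/driver.sh spreads this logic across its pick_driver, vfio, and is_driver helpers, so take it as a reconstruction under those assumptions.

    pick_vfio_driver() {
        # Hypothetical name; condensed from the xtrace above, not the SPDK source.
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is viable when IOMMU groups exist or no-IOMMU mode is on,
        # and the module dependency chain resolves to actual .ko files.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }

On this host the IOMMU-group count is nonzero and the dependency chain resolves, so the trace settles on driver=vfio-pci and the test passes with fail=0.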
03:24:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 ************************************ 00:27:39.516 END TEST driver 00:27:39.516 ************************************ 00:27:39.516 03:24:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:27:39.516 03:24:19 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:39.516 03:24:19 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:39.516 03:24:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 ************************************ 00:27:39.516 START TEST devices 00:27:39.516 ************************************ 00:27:39.516 03:24:20 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:27:39.516 * Looking for test storage... 00:27:39.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:27:39.516 03:24:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:27:39.516 03:24:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:27:39.516 03:24:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:27:39.516 03:24:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:42.061 03:24:23 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:42.061 No valid GPT data, 
bailing 00:27:42.061 03:24:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:27:42.061 03:24:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:42.061 03:24:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:27:42.061 03:24:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:42.061 03:24:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:27:42.061 ************************************ 00:27:42.061 START TEST nvme_mount 00:27:42.061 ************************************ 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:27:42.061 03:24:23 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:27:42.061 03:24:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:27:42.995 Creating new GPT entries in memory. 00:27:42.995 GPT data structures destroyed! You may now partition the disk using fdisk or 00:27:42.995 other utilities. 00:27:42.995 03:24:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:27:42.995 03:24:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:27:42.995 03:24:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:27:42.995 03:24:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:27:42.995 03:24:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:27:43.930 Creating new GPT entries in memory. 00:27:43.930 The operation has completed successfully. 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1956308 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
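Stripped of the xtrace plumbing, the nvme_mount setup just traced is a plain partition, format, and mount sequence: zap the disk's partition tables, create one 1 GiB partition (size 1073741824 / 512 = 2097152 sectors, hence --new=1:2048:2099199), then mkfs and mount it. The equivalent commands, lifted from the trace with illustrative disk/mnt variables, are below; they are destructive to the named device, so double-check it before running anything like this.

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                            # destroy existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition starting at sector 2048
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"                           # quiet, force
    mount "${disk}p1" "$mnt"

The flock serializes sgdisk against concurrent access to the device, and the backgrounded sync_dev_uevents.sh in the trace appears to wait for the kernel's partition uevent (the wait 1956308) so that /dev/nvme0n1p1 exists before mkfs runs.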
00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:27:43.930 03:24:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.457 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:27:46.458 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:27:46.458 03:24:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:27:46.714 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:27:46.714 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:27:46.714 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:27:46.714 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:27:46.714 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:27:46.714 03:24:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:27:46.714 03:24:28 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:46.714 03:24:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:27:46.714 03:24:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:27:46.971 03:24:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:27:46.972 03:24:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:27:46.972 03:24:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:27:50.257 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:27:50.258 03:24:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:27:53.571 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:27:53.571 00:27:53.571 real 0m11.318s 00:27:53.571 user 0m3.328s 00:27:53.571 sys 0m5.777s 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:53.571 03:24:34 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:27:53.571 ************************************ 00:27:53.571 END TEST nvme_mount 00:27:53.571 ************************************ 
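Most of the trace in this test is the verify helper, which re-runs setup.sh config with PCI_ALLOWED pinned to the device under test and scans one status line per PCI function. Reconstructed from the xtrace only (variable names as traced in devices.sh; exactly how the loop is fed is an assumption here), the logic is roughly:

  # A config line looks like:
  #   0000:5f:00.0 (8086 0a54): Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev
  # so read -r pci _ _ status leaves the BDF in pci and the message tail in status.
  found=0
  while read -r pci _ _ status; do
      [[ $pci == "$dev" ]] || continue                          # only the allowed test device
      [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
  done < <(PCI_ALLOWED=$dev setup output config)
  (( found == 1 ))    # verify fails unless the expected mount was reported active

In the nvme_mount passes above, $mounts was first nvme0n1:nvme0n1p1, then nvme0n1:nvme0n1 for the whole-disk filesystem, and finally data@nvme0n1 once the device carried data but no mount.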
00:27:53.571 03:24:34 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:27:53.571 03:24:34 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:53.571 03:24:34 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:53.571 03:24:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:27:53.571 ************************************ 00:27:53.571 START TEST dm_mount 00:27:53.571 ************************************ 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:27:53.571 03:24:34 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:27:54.507 Creating new GPT entries in memory. 00:27:54.507 GPT data structures destroyed! You may now partition the disk using fdisk or 00:27:54.507 other utilities. 00:27:54.507 03:24:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:27:54.507 03:24:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:27:54.507 03:24:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:27:54.507 03:24:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:27:54.507 03:24:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:27:55.443 Creating new GPT entries in memory. 00:27:55.443 The operation has completed successfully. 
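The sgdisk bounds here are derived, not hard-coded: partition_drive converts size=1073741824 bytes into 512-byte sectors and then walks part_start/part_end once per partition (the second --new call appears just below). The arithmetic can be replayed directly:

  size=$(( 1073741824 / 512 ))    # 2097152 sectors, i.e. 1 GiB per partition
  part_start=0 part_end=0
  for part in 1 2; do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      echo "--new=$part:$part_start:$part_end"
  done
  # prints --new=1:2048:2099199 and --new=2:2099200:4196351,
  # matching the two flocked sgdisk calls in this dm_mount trace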
00:27:55.443 03:24:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:27:55.443 03:24:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:27:55.443 03:24:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:27:55.443 03:24:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:27:55.443 03:24:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:27:56.376 The operation has completed successfully. 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1960787 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:27:56.376 03:24:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:27:59.654 03:24:40 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:27:59.654 03:24:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.932 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:28:02.933 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:28:02.933 00:28:02.933 real 0m9.273s 00:28:02.933 user 0m2.404s 00:28:02.933 sys 0m3.922s 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:02.933 03:24:43 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:28:02.933 ************************************ 00:28:02.933 END TEST dm_mount 00:28:02.933 ************************************ 00:28:02.933 03:24:43 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:28:02.933 03:24:43 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:28:02.933 03:24:43 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:28:02.933 03:24:43 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:28:02.933 03:24:43 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:28:02.933 03:24:43 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:28:02.933 03:24:43 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:28:02.933 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:28:02.933 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:28:02.933 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:28:02.933 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:28:02.933 03:24:44 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:28:02.933 03:24:44 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:28:02.933 03:24:44 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:28:02.933 03:24:44 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:28:02.933 03:24:44 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:28:02.933 03:24:44 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:28:02.933 03:24:44 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:28:02.933 00:28:02.933 real 0m24.151s 00:28:02.933 user 0m6.838s 00:28:02.933 sys 0m11.980s 00:28:02.933 03:24:44 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:02.933 03:24:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:28:02.933 ************************************ 00:28:02.933 END TEST devices 00:28:02.933 ************************************ 00:28:02.933 00:28:02.933 real 1m22.744s 00:28:02.933 user 0m26.200s 00:28:02.933 sys 0m45.063s 00:28:02.933 03:24:44 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:02.933 03:24:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:28:02.933 ************************************ 00:28:02.933 END TEST setup.sh 00:28:02.933 ************************************ 00:28:02.933 03:24:44 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:28:05.460 Hugepages
00:28:05.460 node hugesize free / total
00:28:05.460 node0 1048576kB 0 / 0
00:28:05.460 node0 2048kB 2048 / 2048
00:28:05.460 node1 1048576kB 0 / 0
00:28:05.460 node1 2048kB 0 / 0
00:28:05.460
00:28:05.460 Type BDF Vendor Device NUMA Driver Device Block devices
00:28:05.460 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:28:05.460 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:28:05.460 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:28:05.460 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:28:05.460 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:28:05.460 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:28:05.460 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:28:05.460 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:28:05.460 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:28:05.460 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:28:05.460 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:28:05.460 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:28:05.460 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:28:05.460 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:28:05.460 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:28:05.460 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:28:05.460 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:28:05.460 03:24:46 -- spdk/autotest.sh@130 -- # uname -s 00:28:05.460 03:24:46 --
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:28:05.460 03:24:46 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:28:05.460 03:24:46 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:07.986 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:07.986 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:09.359 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:28:09.616 03:24:50 -- common/autotest_common.sh@1531 -- # sleep 1 00:28:10.551 03:24:51 -- common/autotest_common.sh@1532 -- # bdfs=() 00:28:10.551 03:24:51 -- common/autotest_common.sh@1532 -- # local bdfs 00:28:10.551 03:24:51 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:28:10.551 03:24:51 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:28:10.551 03:24:51 -- common/autotest_common.sh@1512 -- # bdfs=() 00:28:10.551 03:24:51 -- common/autotest_common.sh@1512 -- # local bdfs 00:28:10.551 03:24:51 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:10.551 03:24:51 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:10.551 03:24:51 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:28:10.808 03:24:51 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:28:10.808 03:24:51 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5f:00.0 00:28:10.808 03:24:51 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:13.333 Waiting for block devices as requested 00:28:13.333 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:28:13.333 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:13.333 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:13.333 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:13.333 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:13.590 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:13.590 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:13.590 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:13.847 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:13.847 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:13.847 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:13.847 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:14.105 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:14.105 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:14.105 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:14.363 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:14.363 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:14.363 03:24:55 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 
00:28:14.363 03:24:55 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1501 -- # grep 0000:5f:00.0/nvme/nvme 00:28:14.363 03:24:55 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:28:14.363 03:24:55 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:28:14.363 03:24:55 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1544 -- # grep oacs 00:28:14.363 03:24:55 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:28:14.363 03:24:55 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:28:14.363 03:24:55 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:28:14.363 03:24:55 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:28:14.363 03:24:55 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:28:14.363 03:24:55 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:28:14.363 03:24:55 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:28:14.363 03:24:55 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:28:14.363 03:24:55 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:28:14.363 03:24:55 -- common/autotest_common.sh@1556 -- # continue 00:28:14.363 03:24:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:28:14.363 03:24:55 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:14.363 03:24:55 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 03:24:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:28:14.363 03:24:55 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:14.363 03:24:55 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 03:24:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:17.700 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:17.700 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:19.077 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:28:19.077 03:25:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:28:19.077 03:25:00 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:19.077 03:25:00 -- 
common/autotest_common.sh@10 -- # set +x 00:28:19.077 03:25:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:28:19.077 03:25:00 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:28:19.077 03:25:00 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:28:19.077 03:25:00 -- common/autotest_common.sh@1576 -- # bdfs=() 00:28:19.077 03:25:00 -- common/autotest_common.sh@1576 -- # local bdfs 00:28:19.077 03:25:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:28:19.077 03:25:00 -- common/autotest_common.sh@1512 -- # bdfs=() 00:28:19.077 03:25:00 -- common/autotest_common.sh@1512 -- # local bdfs 00:28:19.077 03:25:00 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:19.077 03:25:00 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:19.077 03:25:00 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:28:19.337 03:25:00 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:28:19.337 03:25:00 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5f:00.0 00:28:19.337 03:25:00 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:28:19.337 03:25:00 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:28:19.337 03:25:00 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:28:19.337 03:25:00 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:28:19.337 03:25:00 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:28:19.337 03:25:00 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:5f:00.0 00:28:19.337 03:25:00 -- common/autotest_common.sh@1591 -- # [[ -z 0000:5f:00.0 ]] 00:28:19.337 03:25:00 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=1970393 00:28:19.337 03:25:00 -- common/autotest_common.sh@1597 -- # waitforlisten 1970393 00:28:19.337 03:25:00 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:19.337 03:25:00 -- common/autotest_common.sh@830 -- # '[' -z 1970393 ']' 00:28:19.337 03:25:00 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.337 03:25:00 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:19.337 03:25:00 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.337 03:25:00 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:19.337 03:25:00 -- common/autotest_common.sh@10 -- # set +x 00:28:19.337 [2024-06-11 03:25:00.552048] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
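The opal_revert_cleanup enumeration above is a two-stage filter: gen_nvme.sh lists every NVMe BDF, and only controllers whose PCI device ID matches 0x0a54 (the drive model on this node) are kept. A sketch under the same names, with $rootdir standing in for the jenkins workspace spdk checkout:

  bdfs=()
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")   # sysfs prints e.g. 0x0a54
      [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"   # on this node: 0000:5f:00.0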
00:28:19.337 [2024-06-11 03:25:00.552092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970393 ] 00:28:19.337 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.337 [2024-06-11 03:25:00.613142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.337 [2024-06-11 03:25:00.652757] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.596 03:25:00 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:19.596 03:25:00 -- common/autotest_common.sh@863 -- # return 0 00:28:19.596 03:25:00 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:28:19.596 03:25:00 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:28:19.596 03:25:00 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:28:22.944 nvme0n1 00:28:22.944 03:25:03 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:28:22.944 [2024-06-11 03:25:03.982775] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:28:22.944 request: 00:28:22.944 { 00:28:22.944 "nvme_ctrlr_name": "nvme0", 00:28:22.944 "password": "test", 00:28:22.944 "method": "bdev_nvme_opal_revert", 00:28:22.944 "req_id": 1 00:28:22.944 } 00:28:22.944 Got JSON-RPC error response 00:28:22.944 response: 00:28:22.944 { 00:28:22.944 "code": -32602, 00:28:22.944 "message": "Invalid parameters" 00:28:22.944 } 00:28:22.944 03:25:04 -- common/autotest_common.sh@1603 -- # true 00:28:22.944 03:25:04 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:28:22.944 03:25:04 -- common/autotest_common.sh@1607 -- # killprocess 1970393 00:28:22.944 03:25:04 -- common/autotest_common.sh@949 -- # '[' -z 1970393 ']' 00:28:22.944 03:25:04 -- common/autotest_common.sh@953 -- # kill -0 1970393 00:28:22.944 03:25:04 -- common/autotest_common.sh@954 -- # uname 00:28:22.944 03:25:04 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:22.944 03:25:04 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1970393 00:28:22.944 03:25:04 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:22.944 03:25:04 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:22.944 03:25:04 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1970393' 00:28:22.944 killing process with pid 1970393 00:28:22.944 03:25:04 -- common/autotest_common.sh@968 -- # kill 1970393 00:28:22.944 03:25:04 -- common/autotest_common.sh@973 -- # wait 1970393 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:28:22.944 EAL: Unexpected size 0 of DMA 
00:28:24.847 03:25:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:28:24.847 03:25:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:28:24.847 03:25:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:28:24.847 03:25:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:28:24.847 03:25:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:28:24.847 03:25:06 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:24.847 03:25:06 -- common/autotest_common.sh@10 -- # set +x 00:28:24.847 03:25:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:28:24.847 03:25:06 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:28:24.847 03:25:06 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:24.847 03:25:06 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:24.847 03:25:06 -- common/autotest_common.sh@10 -- # set +x 00:28:24.847 ************************************ 00:28:24.847 START TEST env 00:28:24.847 ************************************ 00:28:24.847 03:25:06 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:28:25.106 * Looking for test storage...
00:28:25.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:28:25.106 03:25:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:28:25.106 03:25:06 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:25.106 03:25:06 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:25.106 03:25:06 env -- common/autotest_common.sh@10 -- # set +x 00:28:25.106 ************************************ 00:28:25.106 START TEST env_memory 00:28:25.106 ************************************ 00:28:25.106 03:25:06 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:28:25.106 00:28:25.106 00:28:25.106 CUnit - A unit testing framework for C - Version 2.1-3 00:28:25.106 http://cunit.sourceforge.net/ 00:28:25.106 00:28:25.106 00:28:25.106 Suite: memory 00:28:25.106 Test: alloc and free memory map ...[2024-06-11 03:25:06.406253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:28:25.106 passed 00:28:25.106 Test: mem map translation ...[2024-06-11 03:25:06.423923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:28:25.106 [2024-06-11 03:25:06.423938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:28:25.106 [2024-06-11 03:25:06.423972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:28:25.107 [2024-06-11 03:25:06.423978] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:28:25.107 passed 00:28:25.107 Test: mem map registration ...[2024-06-11 03:25:06.459500] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:28:25.107 [2024-06-11 03:25:06.459516] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:28:25.107 passed 00:28:25.107 Test: mem map adjacent registrations ...passed 00:28:25.107 00:28:25.107 Run Summary: Type Total Ran Passed Failed Inactive 00:28:25.107 suites 1 1 n/a 0 0 00:28:25.107 tests 4 4 4 0 0 00:28:25.107 asserts 152 152 152 0 n/a 00:28:25.107 00:28:25.107 Elapsed time = 0.133 seconds 00:28:25.366 00:28:25.366 real 0m0.146s 00:28:25.366 user 0m0.138s 00:28:25.366 sys 0m0.007s 00:28:25.366 03:25:06 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:25.366 03:25:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:28:25.366 ************************************ 00:28:25.366 END TEST env_memory 00:28:25.366 ************************************ 00:28:25.366 03:25:06 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:28:25.366 03:25:06 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:25.366 03:25:06 env -- common/autotest_common.sh@1106 -- # xtrace_disable 
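The *ERROR* lines in env_memory above are the unit test deliberately feeding bad ranges to the mem map API: vaddr=2097152 len=1234 and vaddr=1234 len=2097152 both fail because registrations and translations must start and end on the 2 MiB hugepage granularity (and 281474976710656 = 2^48 lies outside the usermode address range). A tiny sketch of that granularity rule (the shell function is illustrative only, not SPDK API):

    # Illustrative check of the 2 MiB granularity rule memory_ut exercises.
    is_valid_mem_region() {
        local vaddr=$1 len=$2 gran=$((2 * 1024 * 1024))
        (( vaddr % gran == 0 && len % gran == 0 && len > 0 ))
    }
    is_valid_mem_region $((0x200000)) 1234       || echo "rejected: len not 2 MiB aligned"
    is_valid_mem_region $((0x4d2)) $((0x200000)) || echo "rejected: vaddr not 2 MiB aligned"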
00:28:25.366 03:25:06 env -- common/autotest_common.sh@10 -- # set +x 00:28:25.366 ************************************ 00:28:25.366 START TEST env_vtophys 00:28:25.366 ************************************ 00:28:25.366 03:25:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:28:25.366 EAL: lib.eal log level changed from notice to debug 00:28:25.366 EAL: Detected lcore 0 as core 0 on socket 0 00:28:25.366 EAL: Detected lcore 1 as core 1 on socket 0 00:28:25.366 EAL: Detected lcore 2 as core 2 on socket 0 00:28:25.366 EAL: Detected lcore 3 as core 3 on socket 0 00:28:25.366 EAL: Detected lcore 4 as core 4 on socket 0 00:28:25.366 EAL: Detected lcore 5 as core 5 on socket 0 00:28:25.366 EAL: Detected lcore 6 as core 6 on socket 0 00:28:25.366 EAL: Detected lcore 7 as core 9 on socket 0 00:28:25.366 EAL: Detected lcore 8 as core 10 on socket 0 00:28:25.366 EAL: Detected lcore 9 as core 11 on socket 0 00:28:25.366 EAL: Detected lcore 10 as core 12 on socket 0 00:28:25.366 EAL: Detected lcore 11 as core 13 on socket 0 00:28:25.366 EAL: Detected lcore 12 as core 16 on socket 0 00:28:25.366 EAL: Detected lcore 13 as core 17 on socket 0 00:28:25.366 EAL: Detected lcore 14 as core 18 on socket 0 00:28:25.366 EAL: Detected lcore 15 as core 19 on socket 0 00:28:25.366 EAL: Detected lcore 16 as core 20 on socket 0 00:28:25.366 EAL: Detected lcore 17 as core 21 on socket 0 00:28:25.366 EAL: Detected lcore 18 as core 24 on socket 0 00:28:25.366 EAL: Detected lcore 19 as core 25 on socket 0 00:28:25.366 EAL: Detected lcore 20 as core 26 on socket 0 00:28:25.366 EAL: Detected lcore 21 as core 27 on socket 0 00:28:25.366 EAL: Detected lcore 22 as core 28 on socket 0 00:28:25.366 EAL: Detected lcore 23 as core 29 on socket 0 00:28:25.366 EAL: Detected lcore 24 as core 0 on socket 1 00:28:25.366 EAL: Detected lcore 25 as core 1 on socket 1 00:28:25.366 EAL: Detected lcore 26 as core 2 on socket 1 00:28:25.366 EAL: Detected lcore 27 as core 3 on socket 1 00:28:25.366 EAL: Detected lcore 28 as core 4 on socket 1 00:28:25.366 EAL: Detected lcore 29 as core 5 on socket 1 00:28:25.366 EAL: Detected lcore 30 as core 6 on socket 1 00:28:25.366 EAL: Detected lcore 31 as core 8 on socket 1 00:28:25.366 EAL: Detected lcore 32 as core 9 on socket 1 00:28:25.366 EAL: Detected lcore 33 as core 10 on socket 1 00:28:25.366 EAL: Detected lcore 34 as core 11 on socket 1 00:28:25.366 EAL: Detected lcore 35 as core 12 on socket 1 00:28:25.366 EAL: Detected lcore 36 as core 13 on socket 1 00:28:25.366 EAL: Detected lcore 37 as core 16 on socket 1 00:28:25.366 EAL: Detected lcore 38 as core 17 on socket 1 00:28:25.366 EAL: Detected lcore 39 as core 18 on socket 1 00:28:25.366 EAL: Detected lcore 40 as core 19 on socket 1 00:28:25.366 EAL: Detected lcore 41 as core 20 on socket 1 00:28:25.366 EAL: Detected lcore 42 as core 21 on socket 1 00:28:25.366 EAL: Detected lcore 43 as core 25 on socket 1 00:28:25.366 EAL: Detected lcore 44 as core 26 on socket 1 00:28:25.366 EAL: Detected lcore 45 as core 27 on socket 1 00:28:25.366 EAL: Detected lcore 46 as core 28 on socket 1 00:28:25.366 EAL: Detected lcore 47 as core 29 on socket 1 00:28:25.367 EAL: Detected lcore 48 as core 0 on socket 0 00:28:25.367 EAL: Detected lcore 49 as core 1 on socket 0 00:28:25.367 EAL: Detected lcore 50 as core 2 on socket 0 00:28:25.367 EAL: Detected lcore 51 as core 3 on socket 0 00:28:25.367 EAL: Detected lcore 52 as core 4 on socket 0 00:28:25.367 EAL: Detected lcore 53 
as core 5 on socket 0 00:28:25.367 EAL: Detected lcore 54 as core 6 on socket 0 00:28:25.367 EAL: Detected lcore 55 as core 9 on socket 0 00:28:25.367 EAL: Detected lcore 56 as core 10 on socket 0 00:28:25.367 EAL: Detected lcore 57 as core 11 on socket 0 00:28:25.367 EAL: Detected lcore 58 as core 12 on socket 0 00:28:25.367 EAL: Detected lcore 59 as core 13 on socket 0 00:28:25.367 EAL: Detected lcore 60 as core 16 on socket 0 00:28:25.367 EAL: Detected lcore 61 as core 17 on socket 0 00:28:25.367 EAL: Detected lcore 62 as core 18 on socket 0 00:28:25.367 EAL: Detected lcore 63 as core 19 on socket 0 00:28:25.367 EAL: Detected lcore 64 as core 20 on socket 0 00:28:25.367 EAL: Detected lcore 65 as core 21 on socket 0 00:28:25.367 EAL: Detected lcore 66 as core 24 on socket 0 00:28:25.367 EAL: Detected lcore 67 as core 25 on socket 0 00:28:25.367 EAL: Detected lcore 68 as core 26 on socket 0 00:28:25.367 EAL: Detected lcore 69 as core 27 on socket 0 00:28:25.367 EAL: Detected lcore 70 as core 28 on socket 0 00:28:25.367 EAL: Detected lcore 71 as core 29 on socket 0 00:28:25.367 EAL: Detected lcore 72 as core 0 on socket 1 00:28:25.367 EAL: Detected lcore 73 as core 1 on socket 1 00:28:25.367 EAL: Detected lcore 74 as core 2 on socket 1 00:28:25.367 EAL: Detected lcore 75 as core 3 on socket 1 00:28:25.367 EAL: Detected lcore 76 as core 4 on socket 1 00:28:25.367 EAL: Detected lcore 77 as core 5 on socket 1 00:28:25.367 EAL: Detected lcore 78 as core 6 on socket 1 00:28:25.367 EAL: Detected lcore 79 as core 8 on socket 1 00:28:25.367 EAL: Detected lcore 80 as core 9 on socket 1 00:28:25.367 EAL: Detected lcore 81 as core 10 on socket 1 00:28:25.367 EAL: Detected lcore 82 as core 11 on socket 1 00:28:25.367 EAL: Detected lcore 83 as core 12 on socket 1 00:28:25.367 EAL: Detected lcore 84 as core 13 on socket 1 00:28:25.367 EAL: Detected lcore 85 as core 16 on socket 1 00:28:25.367 EAL: Detected lcore 86 as core 17 on socket 1 00:28:25.367 EAL: Detected lcore 87 as core 18 on socket 1 00:28:25.367 EAL: Detected lcore 88 as core 19 on socket 1 00:28:25.367 EAL: Detected lcore 89 as core 20 on socket 1 00:28:25.367 EAL: Detected lcore 90 as core 21 on socket 1 00:28:25.367 EAL: Detected lcore 91 as core 25 on socket 1 00:28:25.367 EAL: Detected lcore 92 as core 26 on socket 1 00:28:25.367 EAL: Detected lcore 93 as core 27 on socket 1 00:28:25.367 EAL: Detected lcore 94 as core 28 on socket 1 00:28:25.367 EAL: Detected lcore 95 as core 29 on socket 1 00:28:25.367 EAL: Maximum logical cores by configuration: 128 00:28:25.367 EAL: Detected CPU lcores: 96 00:28:25.367 EAL: Detected NUMA nodes: 2 00:28:25.367 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:28:25.367 EAL: Detected shared linkage of DPDK 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:28:25.367 EAL: Registered [vdev] bus. 
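EAL's "Detected lcore N as core C on socket S" lines above are read from the kernel's CPU topology files, so the same mapping can be reproduced straight from sysfs. A minimal sketch, assuming a Linux host like the one in this run:

    # Reproduce EAL's lcore/core/socket mapping from sysfs.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore as core $core on socket $socket"
    done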
00:28:25.367 EAL: bus.vdev log level changed from disabled to notice 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:28:25.367 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:28:25.367 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:28:25.367 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:28:25.367 EAL: No shared files mode enabled, IPC will be disabled 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: Bus pci wants IOVA as 'DC' 00:28:25.367 EAL: Bus vdev wants IOVA as 'DC' 00:28:25.367 EAL: Buses did not request a specific IOVA mode. 00:28:25.367 EAL: IOMMU is available, selecting IOVA as VA mode. 00:28:25.367 EAL: Selected IOVA mode 'VA' 00:28:25.367 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.367 EAL: Probing VFIO support... 00:28:25.367 EAL: IOMMU type 1 (Type 1) is supported 00:28:25.367 EAL: IOMMU type 7 (sPAPR) is not supported 00:28:25.367 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:28:25.367 EAL: VFIO support initialized 00:28:25.367 EAL: Ask a virtual area of 0x2e000 bytes 00:28:25.367 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:28:25.367 EAL: Setting up physically contiguous memory... 
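"VFIO support initialized" and the IOVA-as-VA selection above require a populated IOMMU and a usable vfio driver on the host. A quick preflight along those lines (a sketch only; SPDK's setup.sh does considerably more):

    # Preflight for the conditions behind "VFIO support initialized" above.
    if [[ -d /sys/kernel/iommu_groups && -n $(ls -A /sys/kernel/iommu_groups 2>/dev/null) ]]; then
        echo "IOMMU groups present: IOVA-as-VA mode is possible"
    else
        echo "no IOMMU groups: EAL would fall back to physical addressing"
    fi
    # Note: vfio_pci may also be built into the kernel rather than a module.
    lsmod | grep -q '^vfio_pci' && echo "vfio-pci module loaded"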
00:28:25.367 EAL: Setting maximum number of open files to 524288 00:28:25.367 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:28:25.367 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:28:25.367 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:28:25.367 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:28:25.367 EAL: Ask a virtual area of 0x61000 bytes 00:28:25.367 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:28:25.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:28:25.367 EAL: Ask a virtual area of 0x400000000 bytes 00:28:25.367 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:28:25.367 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:28:25.367 EAL: Hugepages will be freed exactly as allocated. 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: TSC frequency is ~2100000 KHz 00:28:25.367 EAL: Main lcore 0 is ready (tid=7fda8ac1ca00;cpuset=[0]) 00:28:25.367 EAL: Trying to obtain current memory policy. 00:28:25.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.367 EAL: Restoring previous memory policy: 0 00:28:25.367 EAL: request: mp_malloc_sync 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: Heap on socket 0 was expanded by 2MB 00:28:25.367 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:28:25.367 EAL: probe driver: 8086:37d2 net_i40e 00:28:25.367 EAL: Not managed by a supported kernel driver, skipped 00:28:25.367 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:28:25.367 EAL: probe driver: 8086:37d2 net_i40e 00:28:25.367 EAL: Not managed by a supported kernel driver, skipped 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: No PCI address specified using 'addr=' in: bus=pci 00:28:25.367 EAL: Mem event callback 'spdk:(nil)' registered 00:28:25.367 00:28:25.367 00:28:25.367 CUnit - A unit testing framework for C - Version 2.1-3 00:28:25.367 http://cunit.sourceforge.net/ 00:28:25.367 00:28:25.367 00:28:25.367 Suite: components_suite 00:28:25.367 Test: vtophys_malloc_test ...passed 00:28:25.367 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:28:25.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.367 EAL: Restoring previous memory policy: 4 00:28:25.367 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.367 EAL: request: mp_malloc_sync 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: Heap on socket 0 was expanded by 4MB 00:28:25.367 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.367 EAL: request: mp_malloc_sync 00:28:25.367 EAL: No shared files mode enabled, IPC is disabled 00:28:25.367 EAL: Heap on socket 0 was shrunk by 4MB 00:28:25.368 EAL: Trying to obtain current memory policy. 00:28:25.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.368 EAL: Restoring previous memory policy: 4 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was expanded by 6MB 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was shrunk by 6MB 00:28:25.368 EAL: Trying to obtain current memory policy. 00:28:25.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.368 EAL: Restoring previous memory policy: 4 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was expanded by 10MB 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was shrunk by 10MB 00:28:25.368 EAL: Trying to obtain current memory policy. 
00:28:25.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.368 EAL: Restoring previous memory policy: 4 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was expanded by 18MB 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was shrunk by 18MB 00:28:25.368 EAL: Trying to obtain current memory policy. 00:28:25.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.368 EAL: Restoring previous memory policy: 4 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was expanded by 34MB 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was shrunk by 34MB 00:28:25.368 EAL: Trying to obtain current memory policy. 00:28:25.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.368 EAL: Restoring previous memory policy: 4 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was expanded by 66MB 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was shrunk by 66MB 00:28:25.368 EAL: Trying to obtain current memory policy. 00:28:25.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.368 EAL: Restoring previous memory policy: 4 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was expanded by 130MB 00:28:25.368 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.368 EAL: request: mp_malloc_sync 00:28:25.368 EAL: No shared files mode enabled, IPC is disabled 00:28:25.368 EAL: Heap on socket 0 was shrunk by 130MB 00:28:25.368 EAL: Trying to obtain current memory policy. 00:28:25.368 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.626 EAL: Restoring previous memory policy: 4 00:28:25.626 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.626 EAL: request: mp_malloc_sync 00:28:25.626 EAL: No shared files mode enabled, IPC is disabled 00:28:25.626 EAL: Heap on socket 0 was expanded by 258MB 00:28:25.626 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.626 EAL: request: mp_malloc_sync 00:28:25.626 EAL: No shared files mode enabled, IPC is disabled 00:28:25.626 EAL: Heap on socket 0 was shrunk by 258MB 00:28:25.626 EAL: Trying to obtain current memory policy. 
00:28:25.626 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:25.626 EAL: Restoring previous memory policy: 4 00:28:25.626 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.626 EAL: request: mp_malloc_sync 00:28:25.626 EAL: No shared files mode enabled, IPC is disabled 00:28:25.626 EAL: Heap on socket 0 was expanded by 514MB 00:28:25.884 EAL: Calling mem event callback 'spdk:(nil)' 00:28:25.884 EAL: request: mp_malloc_sync 00:28:25.884 EAL: No shared files mode enabled, IPC is disabled 00:28:25.884 EAL: Heap on socket 0 was shrunk by 514MB 00:28:25.884 EAL: Trying to obtain current memory policy. 00:28:25.884 EAL: Setting policy MPOL_PREFERRED for socket 0 00:28:26.143 EAL: Restoring previous memory policy: 4 00:28:26.143 EAL: Calling mem event callback 'spdk:(nil)' 00:28:26.143 EAL: request: mp_malloc_sync 00:28:26.143 EAL: No shared files mode enabled, IPC is disabled 00:28:26.143 EAL: Heap on socket 0 was expanded by 1026MB 00:28:26.143 EAL: Calling mem event callback 'spdk:(nil)' 00:28:26.401 EAL: request: mp_malloc_sync 00:28:26.401 EAL: No shared files mode enabled, IPC is disabled 00:28:26.401 EAL: Heap on socket 0 was shrunk by 1026MB 00:28:26.401 passed 00:28:26.401 00:28:26.401 Run Summary: Type Total Ran Passed Failed Inactive 00:28:26.401 suites 1 1 n/a 0 0 00:28:26.401 tests 2 2 2 0 0 00:28:26.401 asserts 497 497 497 0 n/a 00:28:26.401 00:28:26.401 Elapsed time = 0.965 seconds 00:28:26.401 EAL: Calling mem event callback 'spdk:(nil)' 00:28:26.401 EAL: request: mp_malloc_sync 00:28:26.401 EAL: No shared files mode enabled, IPC is disabled 00:28:26.401 EAL: Heap on socket 0 was shrunk by 2MB 00:28:26.401 EAL: No shared files mode enabled, IPC is disabled 00:28:26.401 EAL: No shared files mode enabled, IPC is disabled 00:28:26.401 EAL: No shared files mode enabled, IPC is disabled 00:28:26.401 00:28:26.401 real 0m1.085s 00:28:26.401 user 0m0.622s 00:28:26.401 sys 0m0.429s 00:28:26.401 03:25:07 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:26.401 03:25:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:28:26.401 ************************************ 00:28:26.401 END TEST env_vtophys 00:28:26.401 ************************************ 00:28:26.401 03:25:07 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:28:26.401 03:25:07 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:26.401 03:25:07 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:26.401 03:25:07 env -- common/autotest_common.sh@10 -- # set +x 00:28:26.401 ************************************ 00:28:26.401 START TEST env_pci 00:28:26.401 ************************************ 00:28:26.401 03:25:07 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:28:26.401 00:28:26.401 00:28:26.401 CUnit - A unit testing framework for C - Version 2.1-3 00:28:26.401 http://cunit.sourceforge.net/ 00:28:26.401 00:28:26.401 00:28:26.401 Suite: pci 00:28:26.401 Test: pci_hook ...[2024-06-11 03:25:07.737196] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1971705 has claimed it 00:28:26.401 EAL: Cannot find device (10000:00:01.0) 00:28:26.401 EAL: Failed to attach device on primary process 00:28:26.401 passed 00:28:26.401 00:28:26.402 Run Summary: Type Total Ran Passed Failed Inactive 
00:28:26.402 suites 1 1 n/a 0 0 00:28:26.402 tests 1 1 1 0 0 00:28:26.402 asserts 25 25 25 0 n/a 00:28:26.402 00:28:26.402 Elapsed time = 0.031 seconds 00:28:26.402 00:28:26.402 real 0m0.048s 00:28:26.402 user 0m0.016s 00:28:26.402 sys 0m0.032s 00:28:26.402 03:25:07 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:26.402 03:25:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:28:26.402 ************************************ 00:28:26.402 END TEST env_pci 00:28:26.402 ************************************ 00:28:26.402 03:25:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:28:26.402 03:25:07 env -- env/env.sh@15 -- # uname 00:28:26.660 03:25:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:28:26.660 03:25:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:28:26.660 03:25:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:28:26.660 03:25:07 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:28:26.660 03:25:07 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:26.660 03:25:07 env -- common/autotest_common.sh@10 -- # set +x 00:28:26.660 ************************************ 00:28:26.660 START TEST env_dpdk_post_init 00:28:26.660 ************************************ 00:28:26.660 03:25:07 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:28:26.660 EAL: Detected CPU lcores: 96 00:28:26.660 EAL: Detected NUMA nodes: 2 00:28:26.660 EAL: Detected shared linkage of DPDK 00:28:26.660 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:28:26.660 EAL: Selected IOVA mode 'VA' 00:28:26.660 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.660 EAL: VFIO support initialized 00:28:26.660 TELEMETRY: No legacy callbacks, legacy socket not created 00:28:26.660 EAL: Using IOMMU type 1 (Type 1) 00:28:26.660 EAL: Ignore mapping IO port bar(1) 00:28:26.660 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:28:26.660 EAL: Ignore mapping IO port bar(1) 00:28:26.660 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:28:26.660 EAL: Ignore mapping IO port bar(1) 00:28:26.661 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:28:26.661 EAL: Ignore mapping IO port bar(1) 00:28:26.661 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:28:26.661 EAL: Ignore mapping IO port bar(1) 00:28:26.661 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:28:26.661 EAL: Ignore mapping IO port bar(1) 00:28:26.661 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:28:26.661 EAL: Ignore mapping IO port bar(1) 00:28:26.661 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:28:26.661 EAL: Ignore mapping IO port bar(1) 00:28:26.661 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:28:27.597 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 
00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:28:27.597 EAL: Ignore mapping IO port bar(1) 00:28:27.597 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:28:31.782 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:28:31.782 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:28:31.782 Starting DPDK initialization... 00:28:31.782 Starting SPDK post initialization... 00:28:31.782 SPDK NVMe probe 00:28:31.782 Attaching to 0000:5f:00.0 00:28:31.782 Attached to 0000:5f:00.0 00:28:31.782 Cleaning up... 00:28:31.782 00:28:31.782 real 0m4.918s 00:28:31.782 user 0m3.818s 00:28:31.782 sys 0m0.170s 00:28:31.782 03:25:12 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:31.782 03:25:12 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 ************************************ 00:28:31.782 END TEST env_dpdk_post_init 00:28:31.782 ************************************ 00:28:31.782 03:25:12 env -- env/env.sh@26 -- # uname 00:28:31.782 03:25:12 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:28:31.782 03:25:12 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:28:31.782 03:25:12 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:31.782 03:25:12 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:31.782 03:25:12 env -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 ************************************ 00:28:31.782 START TEST env_mem_callbacks 00:28:31.782 ************************************ 00:28:31.782 03:25:12 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:28:31.782 EAL: Detected CPU lcores: 96 00:28:31.782 EAL: Detected NUMA nodes: 2 00:28:31.782 EAL: Detected shared linkage of DPDK 00:28:31.782 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:28:31.782 EAL: Selected IOVA mode 'VA' 00:28:31.782 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.782 EAL: VFIO support initialized 00:28:31.782 TELEMETRY: No legacy callbacks, legacy socket not created 00:28:31.782 00:28:31.782 00:28:31.782 CUnit - A unit testing framework for C - Version 2.1-3 00:28:31.782 http://cunit.sourceforge.net/ 00:28:31.782 00:28:31.782 00:28:31.782 Suite: memory 00:28:31.782 Test: test ... 
00:28:31.782 register 0x200000200000 2097152 00:28:31.782 malloc 3145728 00:28:31.782 register 0x200000400000 4194304 00:28:31.782 buf 0x200000500000 len 3145728 PASSED 00:28:31.782 malloc 64 00:28:31.782 buf 0x2000004fff40 len 64 PASSED 00:28:31.782 malloc 4194304 00:28:31.782 register 0x200000800000 6291456 00:28:31.782 buf 0x200000a00000 len 4194304 PASSED 00:28:31.782 free 0x200000500000 3145728 00:28:31.782 free 0x2000004fff40 64 00:28:31.782 unregister 0x200000400000 4194304 PASSED 00:28:31.782 free 0x200000a00000 4194304 00:28:31.782 unregister 0x200000800000 6291456 PASSED 00:28:31.782 malloc 8388608 00:28:31.782 register 0x200000400000 10485760 00:28:31.782 buf 0x200000600000 len 8388608 PASSED 00:28:31.782 free 0x200000600000 8388608 00:28:31.782 unregister 0x200000400000 10485760 PASSED 00:28:31.782 passed 00:28:31.782 00:28:31.782 Run Summary: Type Total Ran Passed Failed Inactive 00:28:31.782 suites 1 1 n/a 0 0 00:28:31.782 tests 1 1 1 0 0 00:28:31.782 asserts 15 15 15 0 n/a 00:28:31.782 00:28:31.782 Elapsed time = 0.005 seconds 00:28:31.782 00:28:31.782 real 0m0.054s 00:28:31.782 user 0m0.016s 00:28:31.782 sys 0m0.037s 00:28:31.782 03:25:12 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:31.782 03:25:12 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 ************************************ 00:28:31.782 END TEST env_mem_callbacks 00:28:31.782 ************************************ 00:28:31.782 00:28:31.782 real 0m6.675s 00:28:31.782 user 0m4.781s 00:28:31.782 sys 0m0.961s 00:28:31.782 03:25:12 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:31.782 03:25:12 env -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 ************************************ 00:28:31.782 END TEST env 00:28:31.782 ************************************ 00:28:31.782 03:25:12 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:28:31.782 03:25:12 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:31.782 03:25:12 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:31.782 03:25:12 -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 ************************************ 00:28:31.782 START TEST rpc 00:28:31.782 ************************************ 00:28:31.782 03:25:12 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:28:31.782 * Looking for test storage... 00:28:31.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:28:31.782 03:25:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1972745 00:28:31.782 03:25:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:31.782 03:25:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1972745 00:28:31.782 03:25:13 rpc -- common/autotest_common.sh@830 -- # '[' -z 1972745 ']' 00:28:31.782 03:25:13 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.782 03:25:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:28:31.782 03:25:13 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:31.782 03:25:13 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
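waitforlisten, traced above, polls until the new spdk_tgt process is both alive and answering on /var/tmp/spdk.sock. A simplified sketch of that loop (the real helper in autotest_common.sh carries more retries and error handling; rpc_get_methods serves here as a cheap liveness probe):

    # Simplified waitforlisten: poll until spdk_tgt answers on its RPC socket.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    waitfor() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died
            "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods \
                >/dev/null 2>&1 && return 0           # socket is serving RPCs
            sleep 0.1
        done
        return 1
    }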
00:28:31.782 03:25:13 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:31.782 03:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 [2024-06-11 03:25:13.090950] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:28:31.782 [2024-06-11 03:25:13.090991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1972745 ] 00:28:31.782 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.782 [2024-06-11 03:25:13.148141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.041 [2024-06-11 03:25:13.189065] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:28:32.041 [2024-06-11 03:25:13.189100] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1972745' to capture a snapshot of events at runtime. 00:28:32.041 [2024-06-11 03:25:13.189107] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.041 [2024-06-11 03:25:13.189113] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.041 [2024-06-11 03:25:13.189118] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1972745 for offline analysis/debug. 00:28:32.041 [2024-06-11 03:25:13.189140] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.041 03:25:13 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:32.041 03:25:13 rpc -- common/autotest_common.sh@863 -- # return 0 00:28:32.041 03:25:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:28:32.041 03:25:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:28:32.041 03:25:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:28:32.041 03:25:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:28:32.041 03:25:13 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:32.041 03:25:13 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:32.041 03:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:32.041 ************************************ 00:28:32.041 START TEST rpc_integrity 00:28:32.041 ************************************ 00:28:32.041 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:28:32.041 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:32.041 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.041 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.041 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.041 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:28:32.041 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:28:32.300 03:25:13 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:28:32.300 { 00:28:32.300 "name": "Malloc0", 00:28:32.300 "aliases": [ 00:28:32.300 "baf02f78-9d63-4bc2-8da6-2a506860b2e4" 00:28:32.300 ], 00:28:32.300 "product_name": "Malloc disk", 00:28:32.300 "block_size": 512, 00:28:32.300 "num_blocks": 16384, 00:28:32.300 "uuid": "baf02f78-9d63-4bc2-8da6-2a506860b2e4", 00:28:32.300 "assigned_rate_limits": { 00:28:32.300 "rw_ios_per_sec": 0, 00:28:32.300 "rw_mbytes_per_sec": 0, 00:28:32.300 "r_mbytes_per_sec": 0, 00:28:32.300 "w_mbytes_per_sec": 0 00:28:32.300 }, 00:28:32.300 "claimed": false, 00:28:32.300 "zoned": false, 00:28:32.300 "supported_io_types": { 00:28:32.300 "read": true, 00:28:32.300 "write": true, 00:28:32.300 "unmap": true, 00:28:32.300 "write_zeroes": true, 00:28:32.300 "flush": true, 00:28:32.300 "reset": true, 00:28:32.300 "compare": false, 00:28:32.300 "compare_and_write": false, 00:28:32.300 "abort": true, 00:28:32.300 "nvme_admin": false, 00:28:32.300 "nvme_io": false 00:28:32.300 }, 00:28:32.300 "memory_domains": [ 00:28:32.300 { 00:28:32.300 "dma_device_id": "system", 00:28:32.300 "dma_device_type": 1 00:28:32.300 }, 00:28:32.300 { 00:28:32.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.300 "dma_device_type": 2 00:28:32.300 } 00:28:32.300 ], 00:28:32.300 "driver_specific": {} 00:28:32.300 } 00:28:32.300 ]' 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.300 [2024-06-11 03:25:13.520872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:28:32.300 [2024-06-11 03:25:13.520898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:32.300 [2024-06-11 03:25:13.520910] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x131c2f0 00:28:32.300 [2024-06-11 03:25:13.520917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:32.300 [2024-06-11 03:25:13.521936] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:32.300 [2024-06-11 03:25:13.521956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:28:32.300 Passthru0 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.300 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.300 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:28:32.300 { 00:28:32.300 "name": "Malloc0", 00:28:32.300 "aliases": [ 00:28:32.300 "baf02f78-9d63-4bc2-8da6-2a506860b2e4" 00:28:32.300 ], 00:28:32.300 "product_name": "Malloc disk", 00:28:32.300 "block_size": 512, 00:28:32.300 "num_blocks": 16384, 00:28:32.300 "uuid": "baf02f78-9d63-4bc2-8da6-2a506860b2e4", 00:28:32.300 "assigned_rate_limits": { 00:28:32.300 "rw_ios_per_sec": 0, 00:28:32.300 "rw_mbytes_per_sec": 0, 00:28:32.300 "r_mbytes_per_sec": 0, 00:28:32.300 "w_mbytes_per_sec": 0 00:28:32.300 }, 00:28:32.300 "claimed": true, 00:28:32.300 "claim_type": "exclusive_write", 00:28:32.300 "zoned": false, 00:28:32.300 "supported_io_types": { 00:28:32.300 "read": true, 00:28:32.300 "write": true, 00:28:32.300 "unmap": true, 00:28:32.300 "write_zeroes": true, 00:28:32.300 "flush": true, 00:28:32.300 "reset": true, 00:28:32.300 "compare": false, 00:28:32.300 "compare_and_write": false, 00:28:32.300 "abort": true, 00:28:32.300 "nvme_admin": false, 00:28:32.300 "nvme_io": false 00:28:32.300 }, 00:28:32.300 "memory_domains": [ 00:28:32.300 { 00:28:32.300 "dma_device_id": "system", 00:28:32.300 "dma_device_type": 1 00:28:32.300 }, 00:28:32.300 { 00:28:32.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.300 "dma_device_type": 2 00:28:32.301 } 00:28:32.301 ], 00:28:32.301 "driver_specific": {} 00:28:32.301 }, 00:28:32.301 { 00:28:32.301 "name": "Passthru0", 00:28:32.301 "aliases": [ 00:28:32.301 "ee1fd64d-d151-5807-aa95-7e23b3ca9824" 00:28:32.301 ], 00:28:32.301 "product_name": "passthru", 00:28:32.301 "block_size": 512, 00:28:32.301 "num_blocks": 16384, 00:28:32.301 "uuid": "ee1fd64d-d151-5807-aa95-7e23b3ca9824", 00:28:32.301 "assigned_rate_limits": { 00:28:32.301 "rw_ios_per_sec": 0, 00:28:32.301 "rw_mbytes_per_sec": 0, 00:28:32.301 "r_mbytes_per_sec": 0, 00:28:32.301 "w_mbytes_per_sec": 0 00:28:32.301 }, 00:28:32.301 "claimed": false, 00:28:32.301 "zoned": false, 00:28:32.301 "supported_io_types": { 00:28:32.301 "read": true, 00:28:32.301 "write": true, 00:28:32.301 "unmap": true, 00:28:32.301 "write_zeroes": true, 00:28:32.301 "flush": true, 00:28:32.301 "reset": true, 00:28:32.301 "compare": false, 00:28:32.301 "compare_and_write": false, 00:28:32.301 "abort": true, 00:28:32.301 "nvme_admin": false, 00:28:32.301 "nvme_io": false 00:28:32.301 }, 00:28:32.301 "memory_domains": [ 00:28:32.301 { 00:28:32.301 "dma_device_id": "system", 00:28:32.301 "dma_device_type": 1 00:28:32.301 }, 00:28:32.301 { 00:28:32.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.301 "dma_device_type": 2 00:28:32.301 } 00:28:32.301 ], 00:28:32.301 "driver_specific": { 00:28:32.301 "passthru": { 00:28:32.301 "name": "Passthru0", 00:28:32.301 "base_bdev_name": "Malloc0" 00:28:32.301 } 00:28:32.301 } 00:28:32.301 } 00:28:32.301 ]' 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.301 
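The rpc_integrity test running here walks a malloc bdev through its full lifecycle: create it, claim it behind a passthru vbdev, confirm both appear in bdev_get_bdevs, then delete in reverse order and confirm the list is empty again. A minimal sketch of the same sequence, assuming an SPDK checkout (./scripts/rpc.py) and jq in place of the harness's rpc_cmd wrapper; the Malloc0/Passthru0 names and the 8 MiB / 512 B geometry follow the trace:

    rpc=./scripts/rpc.py
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]       # starts empty
    malloc=$($rpc bdev_malloc_create 8 512)              # 16384 x 512 B blocks -> Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0  # passthru claims the base bdev
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]       # Malloc0 (claimed) + Passthru0
    $rpc bdev_passthru_delete Passthru0                  # tear down in reverse order
    $rpc bdev_malloc_delete "$malloc"
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]       # empty again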
03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:28:32.301 03:25:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:28:32.301 00:28:32.301 real 0m0.251s 00:28:32.301 user 0m0.160s 00:28:32.301 sys 0m0.023s 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:32.301 03:25:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.301 ************************************ 00:28:32.301 END TEST rpc_integrity 00:28:32.301 ************************************ 00:28:32.301 03:25:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:28:32.301 03:25:13 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:32.301 03:25:13 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:32.301 03:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:32.559 ************************************ 00:28:32.559 START TEST rpc_plugins 00:28:32.559 ************************************ 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:28:32.559 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.559 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:28:32.559 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.559 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:28:32.559 { 00:28:32.559 "name": "Malloc1", 00:28:32.559 "aliases": [ 00:28:32.559 "b91257ea-1363-4643-8c1c-f81d7e8b6a48" 00:28:32.559 ], 00:28:32.559 "product_name": "Malloc disk", 00:28:32.559 "block_size": 4096, 00:28:32.559 "num_blocks": 256, 00:28:32.559 "uuid": "b91257ea-1363-4643-8c1c-f81d7e8b6a48", 00:28:32.559 "assigned_rate_limits": { 00:28:32.559 "rw_ios_per_sec": 0, 00:28:32.559 "rw_mbytes_per_sec": 0, 00:28:32.559 "r_mbytes_per_sec": 0, 00:28:32.559 "w_mbytes_per_sec": 0 00:28:32.559 }, 00:28:32.559 "claimed": false, 00:28:32.559 "zoned": false, 00:28:32.559 "supported_io_types": { 00:28:32.559 "read": true, 00:28:32.559 "write": true, 00:28:32.559 "unmap": true, 00:28:32.559 "write_zeroes": true, 00:28:32.559 
"flush": true, 00:28:32.559 "reset": true, 00:28:32.559 "compare": false, 00:28:32.559 "compare_and_write": false, 00:28:32.559 "abort": true, 00:28:32.559 "nvme_admin": false, 00:28:32.559 "nvme_io": false 00:28:32.559 }, 00:28:32.559 "memory_domains": [ 00:28:32.559 { 00:28:32.559 "dma_device_id": "system", 00:28:32.559 "dma_device_type": 1 00:28:32.559 }, 00:28:32.559 { 00:28:32.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:32.559 "dma_device_type": 2 00:28:32.559 } 00:28:32.559 ], 00:28:32.559 "driver_specific": {} 00:28:32.559 } 00:28:32.559 ]' 00:28:32.559 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:28:32.559 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:28:32.559 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.559 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:32.560 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.560 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:28:32.560 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.560 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:32.560 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.560 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:28:32.560 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:28:32.560 03:25:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:28:32.560 00:28:32.560 real 0m0.139s 00:28:32.560 user 0m0.090s 00:28:32.560 sys 0m0.014s 00:28:32.560 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:32.560 03:25:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:28:32.560 ************************************ 00:28:32.560 END TEST rpc_plugins 00:28:32.560 ************************************ 00:28:32.560 03:25:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:28:32.560 03:25:13 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:32.560 03:25:13 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:32.560 03:25:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:32.560 ************************************ 00:28:32.560 START TEST rpc_trace_cmd_test 00:28:32.560 ************************************ 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:28:32.560 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1972745", 00:28:32.560 "tpoint_group_mask": "0x8", 00:28:32.560 "iscsi_conn": { 00:28:32.560 "mask": "0x2", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "scsi": { 00:28:32.560 "mask": "0x4", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "bdev": { 00:28:32.560 "mask": "0x8", 00:28:32.560 "tpoint_mask": 
"0xffffffffffffffff" 00:28:32.560 }, 00:28:32.560 "nvmf_rdma": { 00:28:32.560 "mask": "0x10", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "nvmf_tcp": { 00:28:32.560 "mask": "0x20", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "ftl": { 00:28:32.560 "mask": "0x40", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "blobfs": { 00:28:32.560 "mask": "0x80", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "dsa": { 00:28:32.560 "mask": "0x200", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "thread": { 00:28:32.560 "mask": "0x400", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "nvme_pcie": { 00:28:32.560 "mask": "0x800", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "iaa": { 00:28:32.560 "mask": "0x1000", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "nvme_tcp": { 00:28:32.560 "mask": "0x2000", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "bdev_nvme": { 00:28:32.560 "mask": "0x4000", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 }, 00:28:32.560 "sock": { 00:28:32.560 "mask": "0x8000", 00:28:32.560 "tpoint_mask": "0x0" 00:28:32.560 } 00:28:32.560 }' 00:28:32.560 03:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:28:32.818 03:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:28:32.818 03:25:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:28:32.818 00:28:32.818 real 0m0.186s 00:28:32.818 user 0m0.152s 00:28:32.818 sys 0m0.026s 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:32.818 03:25:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.818 ************************************ 00:28:32.818 END TEST rpc_trace_cmd_test 00:28:32.818 ************************************ 00:28:32.818 03:25:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:28:32.818 03:25:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:28:32.818 03:25:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:28:32.818 03:25:14 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:32.818 03:25:14 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:32.818 03:25:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:32.818 ************************************ 00:28:32.818 START TEST rpc_daemon_integrity 00:28:32.818 ************************************ 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:32.818 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:28:33.077 { 00:28:33.077 "name": "Malloc2", 00:28:33.077 "aliases": [ 00:28:33.077 "25c1a91c-00e0-4e30-917b-a28e61d2b5eb" 00:28:33.077 ], 00:28:33.077 "product_name": "Malloc disk", 00:28:33.077 "block_size": 512, 00:28:33.077 "num_blocks": 16384, 00:28:33.077 "uuid": "25c1a91c-00e0-4e30-917b-a28e61d2b5eb", 00:28:33.077 "assigned_rate_limits": { 00:28:33.077 "rw_ios_per_sec": 0, 00:28:33.077 "rw_mbytes_per_sec": 0, 00:28:33.077 "r_mbytes_per_sec": 0, 00:28:33.077 "w_mbytes_per_sec": 0 00:28:33.077 }, 00:28:33.077 "claimed": false, 00:28:33.077 "zoned": false, 00:28:33.077 "supported_io_types": { 00:28:33.077 "read": true, 00:28:33.077 "write": true, 00:28:33.077 "unmap": true, 00:28:33.077 "write_zeroes": true, 00:28:33.077 "flush": true, 00:28:33.077 "reset": true, 00:28:33.077 "compare": false, 00:28:33.077 "compare_and_write": false, 00:28:33.077 "abort": true, 00:28:33.077 "nvme_admin": false, 00:28:33.077 "nvme_io": false 00:28:33.077 }, 00:28:33.077 "memory_domains": [ 00:28:33.077 { 00:28:33.077 "dma_device_id": "system", 00:28:33.077 "dma_device_type": 1 00:28:33.077 }, 00:28:33.077 { 00:28:33.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.077 "dma_device_type": 2 00:28:33.077 } 00:28:33.077 ], 00:28:33.077 "driver_specific": {} 00:28:33.077 } 00:28:33.077 ]' 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 [2024-06-11 03:25:14.290980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:28:33.077 [2024-06-11 03:25:14.291007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.077 [2024-06-11 03:25:14.291024] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x116ba70 00:28:33.077 [2024-06-11 03:25:14.291030] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.077 [2024-06-11 03:25:14.291922] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.077 [2024-06-11 03:25:14.291941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:28:33.077 Passthru0 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:28:33.077 { 00:28:33.077 "name": "Malloc2", 00:28:33.077 "aliases": [ 00:28:33.077 "25c1a91c-00e0-4e30-917b-a28e61d2b5eb" 00:28:33.077 ], 00:28:33.077 "product_name": "Malloc disk", 00:28:33.077 "block_size": 512, 00:28:33.077 "num_blocks": 16384, 00:28:33.077 "uuid": "25c1a91c-00e0-4e30-917b-a28e61d2b5eb", 00:28:33.077 "assigned_rate_limits": { 00:28:33.077 "rw_ios_per_sec": 0, 00:28:33.077 "rw_mbytes_per_sec": 0, 00:28:33.077 "r_mbytes_per_sec": 0, 00:28:33.077 "w_mbytes_per_sec": 0 00:28:33.077 }, 00:28:33.077 "claimed": true, 00:28:33.077 "claim_type": "exclusive_write", 00:28:33.077 "zoned": false, 00:28:33.077 "supported_io_types": { 00:28:33.077 "read": true, 00:28:33.077 "write": true, 00:28:33.077 "unmap": true, 00:28:33.077 "write_zeroes": true, 00:28:33.077 "flush": true, 00:28:33.077 "reset": true, 00:28:33.077 "compare": false, 00:28:33.077 "compare_and_write": false, 00:28:33.077 "abort": true, 00:28:33.077 "nvme_admin": false, 00:28:33.077 "nvme_io": false 00:28:33.077 }, 00:28:33.077 "memory_domains": [ 00:28:33.077 { 00:28:33.077 "dma_device_id": "system", 00:28:33.077 "dma_device_type": 1 00:28:33.077 }, 00:28:33.077 { 00:28:33.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.077 "dma_device_type": 2 00:28:33.077 } 00:28:33.077 ], 00:28:33.077 "driver_specific": {} 00:28:33.077 }, 00:28:33.077 { 00:28:33.077 "name": "Passthru0", 00:28:33.077 "aliases": [ 00:28:33.077 "f011cb96-80e6-56c9-a2ef-e1bc87db9833" 00:28:33.077 ], 00:28:33.077 "product_name": "passthru", 00:28:33.077 "block_size": 512, 00:28:33.077 "num_blocks": 16384, 00:28:33.077 "uuid": "f011cb96-80e6-56c9-a2ef-e1bc87db9833", 00:28:33.077 "assigned_rate_limits": { 00:28:33.077 "rw_ios_per_sec": 0, 00:28:33.077 "rw_mbytes_per_sec": 0, 00:28:33.077 "r_mbytes_per_sec": 0, 00:28:33.077 "w_mbytes_per_sec": 0 00:28:33.077 }, 00:28:33.077 "claimed": false, 00:28:33.077 "zoned": false, 00:28:33.077 "supported_io_types": { 00:28:33.077 "read": true, 00:28:33.077 "write": true, 00:28:33.077 "unmap": true, 00:28:33.077 "write_zeroes": true, 00:28:33.077 "flush": true, 00:28:33.077 "reset": true, 00:28:33.077 "compare": false, 00:28:33.077 "compare_and_write": false, 00:28:33.077 "abort": true, 00:28:33.077 "nvme_admin": false, 00:28:33.077 "nvme_io": false 00:28:33.077 }, 00:28:33.077 "memory_domains": [ 00:28:33.077 { 00:28:33.077 "dma_device_id": "system", 00:28:33.077 "dma_device_type": 1 00:28:33.077 }, 00:28:33.077 { 00:28:33.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:33.077 "dma_device_type": 2 00:28:33.077 } 00:28:33.077 ], 00:28:33.077 "driver_specific": { 00:28:33.077 "passthru": { 00:28:33.077 "name": "Passthru0", 00:28:33.077 "base_bdev_name": "Malloc2" 00:28:33.077 } 00:28:33.077 } 00:28:33.077 } 00:28:33.077 ]' 00:28:33.077 03:25:14 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:28:33.077 00:28:33.077 real 0m0.265s 00:28:33.077 user 0m0.161s 00:28:33.077 sys 0m0.035s 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:33.077 03:25:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:28:33.077 ************************************ 00:28:33.077 END TEST rpc_daemon_integrity 00:28:33.077 ************************************ 00:28:33.077 03:25:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:33.077 03:25:14 rpc -- rpc/rpc.sh@84 -- # killprocess 1972745 00:28:33.077 03:25:14 rpc -- common/autotest_common.sh@949 -- # '[' -z 1972745 ']' 00:28:33.077 03:25:14 rpc -- common/autotest_common.sh@953 -- # kill -0 1972745 00:28:33.077 03:25:14 rpc -- common/autotest_common.sh@954 -- # uname 00:28:33.077 03:25:14 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:33.077 03:25:14 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1972745 00:28:33.336 03:25:14 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:33.336 03:25:14 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:33.336 03:25:14 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1972745' 00:28:33.336 killing process with pid 1972745 00:28:33.336 03:25:14 rpc -- common/autotest_common.sh@968 -- # kill 1972745 00:28:33.336 03:25:14 rpc -- common/autotest_common.sh@973 -- # wait 1972745 00:28:33.593 00:28:33.593 real 0m1.826s 00:28:33.593 user 0m2.351s 00:28:33.593 sys 0m0.604s 00:28:33.593 03:25:14 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:33.593 03:25:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:28:33.593 ************************************ 00:28:33.593 END TEST rpc 00:28:33.593 ************************************ 00:28:33.593 03:25:14 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:28:33.593 03:25:14 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:33.593 03:25:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:33.593 03:25:14 -- common/autotest_common.sh@10 -- # set +x 00:28:33.593 ************************************ 00:28:33.593 START TEST skip_rpc 00:28:33.593 ************************************ 00:28:33.593 03:25:14 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:28:33.593 * Looking for test storage... 00:28:33.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:28:33.593 03:25:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:28:33.593 03:25:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:28:33.593 03:25:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:28:33.593 03:25:14 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:33.593 03:25:14 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:33.593 03:25:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:33.593 ************************************ 00:28:33.593 START TEST skip_rpc 00:28:33.593 ************************************ 00:28:33.593 03:25:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:28:33.593 03:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1973159 00:28:33.593 03:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:33.593 03:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:28:33.593 03:25:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:28:33.851 [2024-06-11 03:25:15.019550] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
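The target starting here was launched with --no-rpc-server, so /var/tmp/spdk.sock is never created; the whole point of skip_rpc is that an RPC attempt must fail, and the NOT wrapper seen below turns that expected failure into a pass. A minimal sketch of the check, assuming an SPDK checkout and a plain sleep in place of the harness's five-second settle:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                    # let EAL and the reactor come up
    if ./scripts/rpc.py spdk_get_version; then
        echo "RPC unexpectedly succeeded without an RPC server" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid"                   # killprocess + wait, as in the trace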
00:28:33.851 [2024-06-11 03:25:15.019589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973159 ] 00:28:33.851 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.851 [2024-06-11 03:25:15.078374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.851 [2024-06-11 03:25:15.117812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1973159 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1973159 ']' 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1973159 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:39.117 03:25:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1973159 00:28:39.117 03:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:39.117 03:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:39.117 03:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1973159' 00:28:39.117 killing process with pid 1973159 00:28:39.117 03:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1973159 00:28:39.117 03:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1973159 00:28:39.117 00:28:39.117 real 0m5.355s 00:28:39.117 user 0m5.123s 00:28:39.117 sys 0m0.264s 00:28:39.117 03:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:39.117 03:25:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:39.117 ************************************ 00:28:39.117 END TEST skip_rpc 
00:28:39.117 ************************************ 00:28:39.117 03:25:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:28:39.117 03:25:20 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:39.117 03:25:20 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:39.117 03:25:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:39.117 ************************************ 00:28:39.117 START TEST skip_rpc_with_json 00:28:39.117 ************************************ 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1974100 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1974100 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1974100 ']' 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:39.117 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:39.117 [2024-06-11 03:25:20.442446] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
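skip_rpc_with_json, starting here, proves that a running target's state can be captured with save_config and replayed at boot: the trace below first shows nvmf_get_transports failing (no TCP transport exists yet), then creates one, dumps the whole configuration (the JSON printed below), and finally relaunches the target with --json and greps its log for the transport-init notice. A minimal sketch of that round-trip, assuming an SPDK checkout; config.json and log.txt match the names used in the trace:

    ./build/bin/spdk_tgt -m 0x1 & pid=$!                 # RPC-enabled target to configure
    sleep 5
    ./scripts/rpc.py nvmf_create_transport -t tcp        # state worth persisting
    ./scripts/rpc.py save_config > config.json           # dump the live config as JSON
    kill "$pid"
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 & pid=$!
    sleep 5
    grep -q 'TCP Transport Init' log.txt                 # transport rebuilt from JSON alone
    kill "$pid"; rm log.txt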
00:28:39.117 [2024-06-11 03:25:20.442487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974100 ] 00:28:39.117 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.117 [2024-06-11 03:25:20.500333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.376 [2024-06-11 03:25:20.538944] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:39.376 [2024-06-11 03:25:20.726463] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:28:39.376 request: 00:28:39.376 { 00:28:39.376 "trtype": "tcp", 00:28:39.376 "method": "nvmf_get_transports", 00:28:39.376 "req_id": 1 00:28:39.376 } 00:28:39.376 Got JSON-RPC error response 00:28:39.376 response: 00:28:39.376 { 00:28:39.376 "code": -19, 00:28:39.376 "message": "No such device" 00:28:39.376 } 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:39.376 [2024-06-11 03:25:20.738562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:39.376 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:39.635 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:39.635 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:28:39.635 { 00:28:39.635 "subsystems": [ 00:28:39.635 { 00:28:39.635 "subsystem": "vfio_user_target", 00:28:39.635 "config": null 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "keyring", 00:28:39.635 "config": [] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "iobuf", 00:28:39.635 "config": [ 00:28:39.635 { 00:28:39.635 "method": "iobuf_set_options", 00:28:39.635 "params": { 00:28:39.635 "small_pool_count": 8192, 00:28:39.635 "large_pool_count": 1024, 00:28:39.635 "small_bufsize": 8192, 00:28:39.635 "large_bufsize": 135168 00:28:39.635 } 00:28:39.635 } 00:28:39.635 ] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "sock", 00:28:39.635 "config": [ 00:28:39.635 { 00:28:39.635 "method": "sock_set_default_impl", 00:28:39.635 "params": { 00:28:39.635 "impl_name": "posix" 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": 
"sock_impl_set_options", 00:28:39.635 "params": { 00:28:39.635 "impl_name": "ssl", 00:28:39.635 "recv_buf_size": 4096, 00:28:39.635 "send_buf_size": 4096, 00:28:39.635 "enable_recv_pipe": true, 00:28:39.635 "enable_quickack": false, 00:28:39.635 "enable_placement_id": 0, 00:28:39.635 "enable_zerocopy_send_server": true, 00:28:39.635 "enable_zerocopy_send_client": false, 00:28:39.635 "zerocopy_threshold": 0, 00:28:39.635 "tls_version": 0, 00:28:39.635 "enable_ktls": false 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "sock_impl_set_options", 00:28:39.635 "params": { 00:28:39.635 "impl_name": "posix", 00:28:39.635 "recv_buf_size": 2097152, 00:28:39.635 "send_buf_size": 2097152, 00:28:39.635 "enable_recv_pipe": true, 00:28:39.635 "enable_quickack": false, 00:28:39.635 "enable_placement_id": 0, 00:28:39.635 "enable_zerocopy_send_server": true, 00:28:39.635 "enable_zerocopy_send_client": false, 00:28:39.635 "zerocopy_threshold": 0, 00:28:39.635 "tls_version": 0, 00:28:39.635 "enable_ktls": false 00:28:39.635 } 00:28:39.635 } 00:28:39.635 ] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "vmd", 00:28:39.635 "config": [] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "accel", 00:28:39.635 "config": [ 00:28:39.635 { 00:28:39.635 "method": "accel_set_options", 00:28:39.635 "params": { 00:28:39.635 "small_cache_size": 128, 00:28:39.635 "large_cache_size": 16, 00:28:39.635 "task_count": 2048, 00:28:39.635 "sequence_count": 2048, 00:28:39.635 "buf_count": 2048 00:28:39.635 } 00:28:39.635 } 00:28:39.635 ] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "bdev", 00:28:39.635 "config": [ 00:28:39.635 { 00:28:39.635 "method": "bdev_set_options", 00:28:39.635 "params": { 00:28:39.635 "bdev_io_pool_size": 65535, 00:28:39.635 "bdev_io_cache_size": 256, 00:28:39.635 "bdev_auto_examine": true, 00:28:39.635 "iobuf_small_cache_size": 128, 00:28:39.635 "iobuf_large_cache_size": 16 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "bdev_raid_set_options", 00:28:39.635 "params": { 00:28:39.635 "process_window_size_kb": 1024 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "bdev_iscsi_set_options", 00:28:39.635 "params": { 00:28:39.635 "timeout_sec": 30 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "bdev_nvme_set_options", 00:28:39.635 "params": { 00:28:39.635 "action_on_timeout": "none", 00:28:39.635 "timeout_us": 0, 00:28:39.635 "timeout_admin_us": 0, 00:28:39.635 "keep_alive_timeout_ms": 10000, 00:28:39.635 "arbitration_burst": 0, 00:28:39.635 "low_priority_weight": 0, 00:28:39.635 "medium_priority_weight": 0, 00:28:39.635 "high_priority_weight": 0, 00:28:39.635 "nvme_adminq_poll_period_us": 10000, 00:28:39.635 "nvme_ioq_poll_period_us": 0, 00:28:39.635 "io_queue_requests": 0, 00:28:39.635 "delay_cmd_submit": true, 00:28:39.635 "transport_retry_count": 4, 00:28:39.635 "bdev_retry_count": 3, 00:28:39.635 "transport_ack_timeout": 0, 00:28:39.635 "ctrlr_loss_timeout_sec": 0, 00:28:39.635 "reconnect_delay_sec": 0, 00:28:39.635 "fast_io_fail_timeout_sec": 0, 00:28:39.635 "disable_auto_failback": false, 00:28:39.635 "generate_uuids": false, 00:28:39.635 "transport_tos": 0, 00:28:39.635 "nvme_error_stat": false, 00:28:39.635 "rdma_srq_size": 0, 00:28:39.635 "io_path_stat": false, 00:28:39.635 "allow_accel_sequence": false, 00:28:39.635 "rdma_max_cq_size": 0, 00:28:39.635 "rdma_cm_event_timeout_ms": 0, 00:28:39.635 "dhchap_digests": [ 00:28:39.635 "sha256", 00:28:39.635 "sha384", 00:28:39.635 "sha512" 
00:28:39.635 ], 00:28:39.635 "dhchap_dhgroups": [ 00:28:39.635 "null", 00:28:39.635 "ffdhe2048", 00:28:39.635 "ffdhe3072", 00:28:39.635 "ffdhe4096", 00:28:39.635 "ffdhe6144", 00:28:39.635 "ffdhe8192" 00:28:39.635 ] 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "bdev_nvme_set_hotplug", 00:28:39.635 "params": { 00:28:39.635 "period_us": 100000, 00:28:39.635 "enable": false 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "bdev_wait_for_examine" 00:28:39.635 } 00:28:39.635 ] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "scsi", 00:28:39.635 "config": null 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "scheduler", 00:28:39.635 "config": [ 00:28:39.635 { 00:28:39.635 "method": "framework_set_scheduler", 00:28:39.635 "params": { 00:28:39.635 "name": "static" 00:28:39.635 } 00:28:39.635 } 00:28:39.635 ] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "vhost_scsi", 00:28:39.635 "config": [] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "vhost_blk", 00:28:39.635 "config": [] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "ublk", 00:28:39.635 "config": [] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "nbd", 00:28:39.635 "config": [] 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "subsystem": "nvmf", 00:28:39.635 "config": [ 00:28:39.635 { 00:28:39.635 "method": "nvmf_set_config", 00:28:39.635 "params": { 00:28:39.635 "discovery_filter": "match_any", 00:28:39.635 "admin_cmd_passthru": { 00:28:39.635 "identify_ctrlr": false 00:28:39.635 } 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "nvmf_set_max_subsystems", 00:28:39.635 "params": { 00:28:39.635 "max_subsystems": 1024 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "nvmf_set_crdt", 00:28:39.635 "params": { 00:28:39.635 "crdt1": 0, 00:28:39.635 "crdt2": 0, 00:28:39.635 "crdt3": 0 00:28:39.635 } 00:28:39.635 }, 00:28:39.635 { 00:28:39.635 "method": "nvmf_create_transport", 00:28:39.635 "params": { 00:28:39.635 "trtype": "TCP", 00:28:39.635 "max_queue_depth": 128, 00:28:39.635 "max_io_qpairs_per_ctrlr": 127, 00:28:39.635 "in_capsule_data_size": 4096, 00:28:39.635 "max_io_size": 131072, 00:28:39.635 "io_unit_size": 131072, 00:28:39.635 "max_aq_depth": 128, 00:28:39.635 "num_shared_buffers": 511, 00:28:39.635 "buf_cache_size": 4294967295, 00:28:39.635 "dif_insert_or_strip": false, 00:28:39.635 "zcopy": false, 00:28:39.635 "c2h_success": true, 00:28:39.635 "sock_priority": 0, 00:28:39.635 "abort_timeout_sec": 1, 00:28:39.635 "ack_timeout": 0, 00:28:39.635 "data_wr_pool_size": 0 00:28:39.635 } 00:28:39.635 } 00:28:39.635 ] 00:28:39.636 }, 00:28:39.636 { 00:28:39.636 "subsystem": "iscsi", 00:28:39.636 "config": [ 00:28:39.636 { 00:28:39.636 "method": "iscsi_set_options", 00:28:39.636 "params": { 00:28:39.636 "node_base": "iqn.2016-06.io.spdk", 00:28:39.636 "max_sessions": 128, 00:28:39.636 "max_connections_per_session": 2, 00:28:39.636 "max_queue_depth": 64, 00:28:39.636 "default_time2wait": 2, 00:28:39.636 "default_time2retain": 20, 00:28:39.636 "first_burst_length": 8192, 00:28:39.636 "immediate_data": true, 00:28:39.636 "allow_duplicated_isid": false, 00:28:39.636 "error_recovery_level": 0, 00:28:39.636 "nop_timeout": 60, 00:28:39.636 "nop_in_interval": 30, 00:28:39.636 "disable_chap": false, 00:28:39.636 "require_chap": false, 00:28:39.636 "mutual_chap": false, 00:28:39.636 "chap_group": 0, 00:28:39.636 "max_large_datain_per_connection": 64, 00:28:39.636 "max_r2t_per_connection": 4, 00:28:39.636 
"pdu_pool_size": 36864, 00:28:39.636 "immediate_data_pool_size": 16384, 00:28:39.636 "data_out_pool_size": 2048 00:28:39.636 } 00:28:39.636 } 00:28:39.636 ] 00:28:39.636 } 00:28:39.636 ] 00:28:39.636 } 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1974100 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1974100 ']' 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1974100 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1974100 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1974100' 00:28:39.636 killing process with pid 1974100 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1974100 00:28:39.636 03:25:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1974100 00:28:39.895 03:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1974313 00:28:39.895 03:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:28:39.895 03:25:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1974313 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1974313 ']' 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1974313 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1974313 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1974313' 00:28:45.159 killing process with pid 1974313 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1974313 00:28:45.159 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1974313 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:28:45.418 00:28:45.418 real 
0m6.207s 00:28:45.418 user 0m5.892s 00:28:45.418 sys 0m0.573s 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:28:45.418 ************************************ 00:28:45.418 END TEST skip_rpc_with_json 00:28:45.418 ************************************ 00:28:45.418 03:25:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:28:45.418 03:25:26 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:45.418 03:25:26 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:45.418 03:25:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:45.418 ************************************ 00:28:45.418 START TEST skip_rpc_with_delay 00:28:45.418 ************************************ 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:28:45.418 [2024-06-11 03:25:26.712666] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
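The *ERROR* just logged is the entire point of skip_rpc_with_delay: --wait-for-rpc asks the app to pause startup until an RPC arrives, which is contradictory with --no-rpc-server, so spdk_app_start must refuse to run. A minimal sketch of the negative check, assuming an SPDK checkout:

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt started despite conflicting flags" >&2
        exit 1                                 # the test passes only if startup fails
    fi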
00:28:45.418 [2024-06-11 03:25:26.712720] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:45.418 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:45.418 00:28:45.419 real 0m0.062s 00:28:45.419 user 0m0.038s 00:28:45.419 sys 0m0.023s 00:28:45.419 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:45.419 03:25:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:28:45.419 ************************************ 00:28:45.419 END TEST skip_rpc_with_delay 00:28:45.419 ************************************ 00:28:45.419 03:25:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:28:45.419 03:25:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:28:45.419 03:25:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:28:45.419 03:25:26 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:45.419 03:25:26 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:45.419 03:25:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:45.419 ************************************ 00:28:45.419 START TEST exit_on_failed_rpc_init 00:28:45.419 ************************************ 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1975311 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1975311 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1975311 ']' 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:45.419 03:25:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:28:45.419 [2024-06-11 03:25:26.811742] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
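exit_on_failed_rpc_init, starting here, launches one target normally and then a second on a different core mask; both default to /var/tmp/spdk.sock, so the second must fail RPC initialization and exit non-zero, which is exactly what the two rpc.c *ERROR* lines below show. A minimal sketch, assuming an SPDK checkout and sleep in place of the waitforlisten helper:

    ./build/bin/spdk_tgt -m 0x1 &              # first instance binds /var/tmp/spdk.sock
    pid=$!
    sleep 5
    if ./build/bin/spdk_tgt -m 0x2; then       # same default socket path, different cores
        echo "second instance unexpectedly started" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"                                # only the first instance needs cleanup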
00:28:45.419 [2024-06-11 03:25:26.811776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975311 ] 00:28:45.678 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.678 [2024-06-11 03:25:26.869356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.678 [2024-06-11 03:25:26.910479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:28:45.936 [2024-06-11 03:25:27.144576] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:28:45.936 [2024-06-11 03:25:27.144622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975317 ] 00:28:45.936 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.936 [2024-06-11 03:25:27.201994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.936 [2024-06-11 03:25:27.241660] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.936 [2024-06-11 03:25:27.241722] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:45.936 [2024-06-11 03:25:27.241731] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:45.936 [2024-06-11 03:25:27.241737] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1975311 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1975311 ']' 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1975311 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:45.936 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1975311 00:28:46.194 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:46.194 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:46.194 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1975311' 00:28:46.194 killing process with pid 1975311 00:28:46.194 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1975311 00:28:46.194 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1975311 00:28:46.452 00:28:46.452 real 0m0.864s 00:28:46.452 user 0m0.915s 00:28:46.452 sys 0m0.365s 00:28:46.452 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:46.452 03:25:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:28:46.452 ************************************ 00:28:46.452 END TEST exit_on_failed_rpc_init 00:28:46.452 ************************************ 00:28:46.452 03:25:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:28:46.452 00:28:46.452 real 0m12.831s 00:28:46.452 user 0m12.096s 00:28:46.452 sys 0m1.463s 00:28:46.452 03:25:27 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:46.452 03:25:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:46.452 ************************************ 00:28:46.452 END TEST skip_rpc 00:28:46.452 ************************************ 00:28:46.452 03:25:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:28:46.452 03:25:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:46.452 03:25:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:46.452 03:25:27 -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.452 ************************************ 00:28:46.452 START TEST rpc_client 00:28:46.452 ************************************ 00:28:46.452 03:25:27 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:28:46.452 * Looking for test storage... 00:28:46.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:28:46.452 03:25:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:28:46.452 OK 00:28:46.452 03:25:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:28:46.452 00:28:46.452 real 0m0.084s 00:28:46.452 user 0m0.035s 00:28:46.452 sys 0m0.056s 00:28:46.452 03:25:27 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:46.452 03:25:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:28:46.452 ************************************ 00:28:46.452 END TEST rpc_client 00:28:46.452 ************************************ 00:28:46.452 03:25:27 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:28:46.452 03:25:27 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:46.452 03:25:27 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:46.452 03:25:27 -- common/autotest_common.sh@10 -- # set +x 00:28:46.711 ************************************ 00:28:46.711 START TEST json_config 00:28:46.711 ************************************ 00:28:46.711 03:25:27 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.711 03:25:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.711 03:25:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.711 03:25:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.711 03:25:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.711 03:25:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.711 03:25:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.711 03:25:27 json_config -- paths/export.sh@5 -- # export PATH 00:28:46.711 03:25:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@47 -- # : 0 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.711 03:25:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:28:46.711 03:25:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:28:46.712 INFO: JSON configuration test init 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:46.712 03:25:27 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:28:46.712 03:25:27 json_config -- json_config/common.sh@9 -- # local app=target 00:28:46.712 03:25:27 json_config -- json_config/common.sh@10 -- # shift 00:28:46.712 03:25:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:28:46.712 03:25:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:28:46.712 03:25:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:28:46.712 03:25:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:46.712 03:25:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:46.712 03:25:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1975549 00:28:46.712 03:25:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:28:46.712 Waiting for target to run... 
00:28:46.712 03:25:27 json_config -- json_config/common.sh@25 -- # waitforlisten 1975549 /var/tmp/spdk_tgt.sock 00:28:46.712 03:25:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@830 -- # '[' -z 1975549 ']' 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:28:46.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:46.712 03:25:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:46.712 [2024-06-11 03:25:28.025969] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:28:46.712 [2024-06-11 03:25:28.026022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975549 ] 00:28:46.712 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.970 [2024-06-11 03:25:28.291775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.970 [2024-06-11 03:25:28.315892] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.539 03:25:28 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:47.539 03:25:28 json_config -- common/autotest_common.sh@863 -- # return 0 00:28:47.539 03:25:28 json_config -- json_config/common.sh@26 -- # echo '' 00:28:47.539 00:28:47.539 03:25:28 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:28:47.539 03:25:28 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:28:47.539 03:25:28 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:47.539 03:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:47.539 03:25:28 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:28:47.540 03:25:28 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:28:47.540 03:25:28 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:47.540 03:25:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:47.540 03:25:28 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:28:47.540 03:25:28 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:28:47.540 03:25:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:28:50.853 03:25:31 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:28:50.853 03:25:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:28:50.853 03:25:31 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:50.853 03:25:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:50.853 03:25:31 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:28:50.853 03:25:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:28:50.853 03:25:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:28:50.853 03:25:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:28:50.853 03:25:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:28:50.853 03:25:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:28:50.853 03:25:32 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:50.853 03:25:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:28:50.853 03:25:32 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:50.853 03:25:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:28:50.853 03:25:32 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:28:50.853 03:25:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:28:51.112 MallocForNvmf0 00:28:51.112 03:25:32 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:28:51.112 03:25:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:28:51.112 MallocForNvmf1 00:28:51.112 03:25:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:28:51.112 03:25:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:28:51.370 [2024-06-11 03:25:32.591504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.370 03:25:32 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.370 03:25:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:51.370 03:25:32 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:28:51.370 03:25:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:28:51.628 03:25:32 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:28:51.628 03:25:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:28:51.886 03:25:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:28:51.886 03:25:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:28:51.886 [2024-06-11 03:25:33.249587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:51.886 03:25:33 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:28:51.886 03:25:33 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:51.886 03:25:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:52.144 03:25:33 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:28:52.144 03:25:33 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:52.144 03:25:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:52.144 03:25:33 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:28:52.144 03:25:33 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:28:52.144 03:25:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:28:52.144 MallocBdevForConfigChangeCheck 00:28:52.144 03:25:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:28:52.144 03:25:33 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:52.144 03:25:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:52.144 03:25:33 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:28:52.144 03:25:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:28:52.710 03:25:33 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:28:52.710 INFO: shutting down applications... 
00:28:52.710 03:25:33 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:28:52.710 03:25:33 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:28:52.710 03:25:33 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:28:52.710 03:25:33 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:28:54.610 Calling clear_iscsi_subsystem 00:28:54.610 Calling clear_nvmf_subsystem 00:28:54.610 Calling clear_nbd_subsystem 00:28:54.610 Calling clear_ublk_subsystem 00:28:54.610 Calling clear_vhost_blk_subsystem 00:28:54.610 Calling clear_vhost_scsi_subsystem 00:28:54.610 Calling clear_bdev_subsystem 00:28:54.868 03:25:36 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:28:54.868 03:25:36 json_config -- json_config/json_config.sh@343 -- # count=100 00:28:54.868 03:25:36 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:28:54.868 03:25:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:28:54.868 03:25:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:28:54.868 03:25:36 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:28:55.127 03:25:36 json_config -- json_config/json_config.sh@345 -- # break 00:28:55.127 03:25:36 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:28:55.127 03:25:36 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:28:55.127 03:25:36 json_config -- json_config/common.sh@31 -- # local app=target 00:28:55.127 03:25:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:28:55.127 03:25:36 json_config -- json_config/common.sh@35 -- # [[ -n 1975549 ]] 00:28:55.127 03:25:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1975549 00:28:55.127 03:25:36 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:28:55.127 03:25:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:55.127 03:25:36 json_config -- json_config/common.sh@41 -- # kill -0 1975549 00:28:55.127 03:25:36 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:28:55.695 03:25:36 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:28:55.695 03:25:36 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:28:55.695 03:25:36 json_config -- json_config/common.sh@41 -- # kill -0 1975549 00:28:55.695 03:25:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:28:55.695 03:25:36 json_config -- json_config/common.sh@43 -- # break 00:28:55.695 03:25:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:28:55.695 03:25:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:28:55.695 SPDK target shutdown done 00:28:55.695 03:25:36 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:28:55.695 INFO: relaunching applications... 
00:28:55.695 03:25:36 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:28:55.695 03:25:36 json_config -- json_config/common.sh@9 -- # local app=target 00:28:55.695 03:25:36 json_config -- json_config/common.sh@10 -- # shift 00:28:55.695 03:25:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:28:55.695 03:25:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:28:55.695 03:25:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:28:55.695 03:25:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:55.695 03:25:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:28:55.695 03:25:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1977170 00:28:55.695 03:25:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:28:55.695 Waiting for target to run... 00:28:55.695 03:25:36 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:28:55.695 03:25:36 json_config -- json_config/common.sh@25 -- # waitforlisten 1977170 /var/tmp/spdk_tgt.sock 00:28:55.695 03:25:36 json_config -- common/autotest_common.sh@830 -- # '[' -z 1977170 ']' 00:28:55.695 03:25:36 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:28:55.695 03:25:36 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:55.695 03:25:36 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:28:55.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:28:55.695 03:25:36 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:55.695 03:25:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:55.695 [2024-06-11 03:25:36.910123] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:28:55.695 [2024-06-11 03:25:36.910179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977170 ] 00:28:55.695 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.953 [2024-06-11 03:25:37.193262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.953 [2024-06-11 03:25:37.217410] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.336 [2024-06-11 03:25:40.213836] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.336 [2024-06-11 03:25:40.246189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:59.336 03:25:40 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:59.336 03:25:40 json_config -- common/autotest_common.sh@863 -- # return 0 00:28:59.336 03:25:40 json_config -- json_config/common.sh@26 -- # echo '' 00:28:59.336 00:28:59.336 03:25:40 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:28:59.336 03:25:40 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:28:59.336 INFO: Checking if target configuration is the same... 
00:28:59.336 03:25:40 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:28:59.336 03:25:40 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:28:59.336 03:25:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:28:59.336 + '[' 2 -ne 2 ']' 00:28:59.336 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:28:59.336 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:28:59.336 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:59.336 +++ basename /dev/fd/62 00:28:59.336 ++ mktemp /tmp/62.XXX 00:28:59.336 + tmp_file_1=/tmp/62.pfg 00:28:59.336 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:28:59.336 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:28:59.336 + tmp_file_2=/tmp/spdk_tgt_config.json.MIf 00:28:59.336 + ret=0 00:28:59.336 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:28:59.336 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:28:59.336 + diff -u /tmp/62.pfg /tmp/spdk_tgt_config.json.MIf 00:28:59.336 + echo 'INFO: JSON config files are the same' 00:28:59.336 INFO: JSON config files are the same 00:28:59.336 + rm /tmp/62.pfg /tmp/spdk_tgt_config.json.MIf 00:28:59.336 + exit 0 00:28:59.336 03:25:40 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:28:59.336 03:25:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:28:59.336 INFO: changing configuration and checking if this can be detected... 00:28:59.336 03:25:40 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:28:59.336 03:25:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:28:59.608 03:25:40 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:28:59.608 03:25:40 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:28:59.608 03:25:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:28:59.608 + '[' 2 -ne 2 ']' 00:28:59.608 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:28:59.608 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:28:59.608 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:59.608 +++ basename /dev/fd/62 00:28:59.608 ++ mktemp /tmp/62.XXX 00:28:59.608 + tmp_file_1=/tmp/62.MmR 00:28:59.608 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:28:59.608 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:28:59.609 + tmp_file_2=/tmp/spdk_tgt_config.json.cZj 00:28:59.609 + ret=0 00:28:59.609 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:28:59.867 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:28:59.867 + diff -u /tmp/62.MmR /tmp/spdk_tgt_config.json.cZj 00:28:59.867 + ret=1 00:28:59.867 + echo '=== Start of file: /tmp/62.MmR ===' 00:28:59.867 + cat /tmp/62.MmR 00:28:59.867 + echo '=== End of file: /tmp/62.MmR ===' 00:28:59.867 + echo '' 00:28:59.867 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cZj ===' 00:28:59.867 + cat /tmp/spdk_tgt_config.json.cZj 00:28:59.867 + echo '=== End of file: /tmp/spdk_tgt_config.json.cZj ===' 00:28:59.867 + echo '' 00:28:59.867 + rm /tmp/62.MmR /tmp/spdk_tgt_config.json.cZj 00:28:59.867 + exit 1 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:28:59.867 INFO: configuration change detected. 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@317 -- # [[ -n 1977170 ]] 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@193 -- # uname -s 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:28:59.867 03:25:41 json_config -- json_config/json_config.sh@323 -- # killprocess 1977170 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@949 -- # '[' -z 1977170 ']' 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@953 -- # kill -0 1977170 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@954 -- # uname 00:28:59.867 03:25:41 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:00.125 03:25:41 
json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1977170 00:29:00.125 03:25:41 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:00.125 03:25:41 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:00.125 03:25:41 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1977170' 00:29:00.125 killing process with pid 1977170 00:29:00.125 03:25:41 json_config -- common/autotest_common.sh@968 -- # kill 1977170 00:29:00.125 03:25:41 json_config -- common/autotest_common.sh@973 -- # wait 1977170 00:29:02.024 03:25:43 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:29:02.024 03:25:43 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:29:02.024 03:25:43 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:02.024 03:25:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:02.024 03:25:43 json_config -- json_config/json_config.sh@328 -- # return 0 00:29:02.024 03:25:43 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:29:02.024 INFO: Success 00:29:02.024 00:29:02.024 real 0m15.458s 00:29:02.024 user 0m16.205s 00:29:02.024 sys 0m1.698s 00:29:02.024 03:25:43 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:02.024 03:25:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:29:02.024 ************************************ 00:29:02.024 END TEST json_config 00:29:02.024 ************************************ 00:29:02.024 03:25:43 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:29:02.024 03:25:43 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:02.024 03:25:43 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:02.024 03:25:43 -- common/autotest_common.sh@10 -- # set +x 00:29:02.024 ************************************ 00:29:02.024 START TEST json_config_extra_key 00:29:02.024 ************************************ 00:29:02.024 03:25:43 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.283 03:25:43 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.283 03:25:43 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.283 03:25:43 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.283 03:25:43 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.283 03:25:43 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.283 03:25:43 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.283 03:25:43 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.283 03:25:43 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:29:02.283 03:25:43 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.283 03:25:43 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:02.283 03:25:43 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:29:02.283 INFO: launching applications... 00:29:02.283 03:25:43 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1978431 00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:29:02.283 Waiting for target to run... 
00:29:02.283 03:25:43 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1978431 /var/tmp/spdk_tgt.sock 00:29:02.284 03:25:43 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1978431 ']' 00:29:02.284 03:25:43 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:29:02.284 03:25:43 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:29:02.284 03:25:43 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:02.284 03:25:43 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:29:02.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:29:02.284 03:25:43 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:02.284 03:25:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:02.284 [2024-06-11 03:25:43.539173] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:02.284 [2024-06-11 03:25:43.539214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978431 ] 00:29:02.284 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.542 [2024-06-11 03:25:43.811690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.542 [2024-06-11 03:25:43.835162] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.108 03:25:44 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:03.108 03:25:44 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:29:03.108 00:29:03.108 03:25:44 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:29:03.108 INFO: shutting down applications... 
00:29:03.108 03:25:44 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1978431 ]] 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1978431 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1978431 00:29:03.108 03:25:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:29:03.676 03:25:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:29:03.676 03:25:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:29:03.676 03:25:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1978431 00:29:03.676 03:25:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:29:03.676 03:25:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:29:03.676 03:25:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:29:03.676 03:25:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:29:03.676 SPDK target shutdown done 00:29:03.676 03:25:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:29:03.676 Success 00:29:03.676 00:29:03.676 real 0m1.435s 00:29:03.676 user 0m1.204s 00:29:03.676 sys 0m0.359s 00:29:03.676 03:25:44 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:03.676 03:25:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:29:03.676 ************************************ 00:29:03.676 END TEST json_config_extra_key 00:29:03.676 ************************************ 00:29:03.676 03:25:44 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:03.676 03:25:44 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:03.676 03:25:44 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:03.676 03:25:44 -- common/autotest_common.sh@10 -- # set +x 00:29:03.676 ************************************ 00:29:03.676 START TEST alias_rpc 00:29:03.676 ************************************ 00:29:03.676 03:25:44 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:29:03.676 * Looking for test storage... 
00:29:03.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:29:03.676 03:25:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:29:03.676 03:25:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1978712 00:29:03.676 03:25:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1978712 00:29:03.676 03:25:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:03.676 03:25:44 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1978712 ']' 00:29:03.676 03:25:44 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.676 03:25:44 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:03.676 03:25:44 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.676 03:25:44 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:03.676 03:25:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:03.676 [2024-06-11 03:25:45.034449] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:03.676 [2024-06-11 03:25:45.034497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978712 ] 00:29:03.676 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.935 [2024-06-11 03:25:45.094045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.935 [2024-06-11 03:25:45.134222] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.500 03:25:45 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:04.500 03:25:45 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:29:04.500 03:25:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:29:04.758 03:25:46 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1978712 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1978712 ']' 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1978712 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1978712 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1978712' 00:29:04.758 killing process with pid 1978712 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@968 -- # kill 1978712 00:29:04.758 03:25:46 alias_rpc -- common/autotest_common.sh@973 -- # wait 1978712 00:29:05.016 00:29:05.016 real 0m1.451s 00:29:05.016 user 0m1.599s 00:29:05.016 sys 0m0.382s 00:29:05.016 03:25:46 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:05.016 03:25:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:05.016 
************************************ 00:29:05.016 END TEST alias_rpc 00:29:05.016 ************************************ 00:29:05.016 03:25:46 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:29:05.016 03:25:46 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:29:05.016 03:25:46 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:05.016 03:25:46 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:05.016 03:25:46 -- common/autotest_common.sh@10 -- # set +x 00:29:05.275 ************************************ 00:29:05.275 START TEST spdkcli_tcp 00:29:05.275 ************************************ 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:29:05.275 * Looking for test storage... 00:29:05.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1978998 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1978998 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1978998 ']' 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:05.275 03:25:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.275 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:29:05.275 [2024-06-11 03:25:46.567476] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:29:05.275 [2024-06-11 03:25:46.567518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978998 ] 00:29:05.275 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.275 [2024-06-11 03:25:46.626909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:05.275 [2024-06-11 03:25:46.668340] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.275 [2024-06-11 03:25:46.668344] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.535 03:25:46 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:05.535 03:25:46 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:29:05.535 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1979007 00:29:05.535 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:29:05.535 03:25:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:29:05.794 [ 00:29:05.794 "bdev_malloc_delete", 00:29:05.794 "bdev_malloc_create", 00:29:05.794 "bdev_null_resize", 00:29:05.794 "bdev_null_delete", 00:29:05.794 "bdev_null_create", 00:29:05.794 "bdev_nvme_cuse_unregister", 00:29:05.794 "bdev_nvme_cuse_register", 00:29:05.794 "bdev_opal_new_user", 00:29:05.794 "bdev_opal_set_lock_state", 00:29:05.794 "bdev_opal_delete", 00:29:05.794 "bdev_opal_get_info", 00:29:05.794 "bdev_opal_create", 00:29:05.794 "bdev_nvme_opal_revert", 00:29:05.794 "bdev_nvme_opal_init", 00:29:05.794 "bdev_nvme_send_cmd", 00:29:05.794 "bdev_nvme_get_path_iostat", 00:29:05.794 "bdev_nvme_get_mdns_discovery_info", 00:29:05.794 "bdev_nvme_stop_mdns_discovery", 00:29:05.794 "bdev_nvme_start_mdns_discovery", 00:29:05.794 "bdev_nvme_set_multipath_policy", 00:29:05.794 "bdev_nvme_set_preferred_path", 00:29:05.794 "bdev_nvme_get_io_paths", 00:29:05.794 "bdev_nvme_remove_error_injection", 00:29:05.794 "bdev_nvme_add_error_injection", 00:29:05.794 "bdev_nvme_get_discovery_info", 00:29:05.794 "bdev_nvme_stop_discovery", 00:29:05.794 "bdev_nvme_start_discovery", 00:29:05.794 "bdev_nvme_get_controller_health_info", 00:29:05.794 "bdev_nvme_disable_controller", 00:29:05.794 "bdev_nvme_enable_controller", 00:29:05.794 "bdev_nvme_reset_controller", 00:29:05.794 "bdev_nvme_get_transport_statistics", 00:29:05.794 "bdev_nvme_apply_firmware", 00:29:05.794 "bdev_nvme_detach_controller", 00:29:05.794 "bdev_nvme_get_controllers", 00:29:05.794 "bdev_nvme_attach_controller", 00:29:05.794 "bdev_nvme_set_hotplug", 00:29:05.794 "bdev_nvme_set_options", 00:29:05.794 "bdev_passthru_delete", 00:29:05.794 "bdev_passthru_create", 00:29:05.794 "bdev_lvol_set_parent_bdev", 00:29:05.794 "bdev_lvol_set_parent", 00:29:05.794 "bdev_lvol_check_shallow_copy", 00:29:05.794 "bdev_lvol_start_shallow_copy", 00:29:05.794 "bdev_lvol_grow_lvstore", 00:29:05.794 "bdev_lvol_get_lvols", 00:29:05.794 "bdev_lvol_get_lvstores", 00:29:05.794 "bdev_lvol_delete", 00:29:05.794 "bdev_lvol_set_read_only", 00:29:05.794 "bdev_lvol_resize", 00:29:05.794 "bdev_lvol_decouple_parent", 00:29:05.794 "bdev_lvol_inflate", 00:29:05.794 "bdev_lvol_rename", 00:29:05.794 "bdev_lvol_clone_bdev", 00:29:05.794 "bdev_lvol_clone", 00:29:05.794 "bdev_lvol_snapshot", 00:29:05.794 "bdev_lvol_create", 00:29:05.794 "bdev_lvol_delete_lvstore", 00:29:05.794 "bdev_lvol_rename_lvstore", 
00:29:05.794 "bdev_lvol_create_lvstore", 00:29:05.794 "bdev_raid_set_options", 00:29:05.794 "bdev_raid_remove_base_bdev", 00:29:05.794 "bdev_raid_add_base_bdev", 00:29:05.794 "bdev_raid_delete", 00:29:05.794 "bdev_raid_create", 00:29:05.794 "bdev_raid_get_bdevs", 00:29:05.794 "bdev_error_inject_error", 00:29:05.794 "bdev_error_delete", 00:29:05.794 "bdev_error_create", 00:29:05.794 "bdev_split_delete", 00:29:05.794 "bdev_split_create", 00:29:05.794 "bdev_delay_delete", 00:29:05.794 "bdev_delay_create", 00:29:05.794 "bdev_delay_update_latency", 00:29:05.794 "bdev_zone_block_delete", 00:29:05.794 "bdev_zone_block_create", 00:29:05.794 "blobfs_create", 00:29:05.794 "blobfs_detect", 00:29:05.794 "blobfs_set_cache_size", 00:29:05.794 "bdev_aio_delete", 00:29:05.794 "bdev_aio_rescan", 00:29:05.795 "bdev_aio_create", 00:29:05.795 "bdev_ftl_set_property", 00:29:05.795 "bdev_ftl_get_properties", 00:29:05.795 "bdev_ftl_get_stats", 00:29:05.795 "bdev_ftl_unmap", 00:29:05.795 "bdev_ftl_unload", 00:29:05.795 "bdev_ftl_delete", 00:29:05.795 "bdev_ftl_load", 00:29:05.795 "bdev_ftl_create", 00:29:05.795 "bdev_virtio_attach_controller", 00:29:05.795 "bdev_virtio_scsi_get_devices", 00:29:05.795 "bdev_virtio_detach_controller", 00:29:05.795 "bdev_virtio_blk_set_hotplug", 00:29:05.795 "bdev_iscsi_delete", 00:29:05.795 "bdev_iscsi_create", 00:29:05.795 "bdev_iscsi_set_options", 00:29:05.795 "accel_error_inject_error", 00:29:05.795 "ioat_scan_accel_module", 00:29:05.795 "dsa_scan_accel_module", 00:29:05.795 "iaa_scan_accel_module", 00:29:05.795 "vfu_virtio_create_scsi_endpoint", 00:29:05.795 "vfu_virtio_scsi_remove_target", 00:29:05.795 "vfu_virtio_scsi_add_target", 00:29:05.795 "vfu_virtio_create_blk_endpoint", 00:29:05.795 "vfu_virtio_delete_endpoint", 00:29:05.795 "keyring_file_remove_key", 00:29:05.795 "keyring_file_add_key", 00:29:05.795 "keyring_linux_set_options", 00:29:05.795 "iscsi_get_histogram", 00:29:05.795 "iscsi_enable_histogram", 00:29:05.795 "iscsi_set_options", 00:29:05.795 "iscsi_get_auth_groups", 00:29:05.795 "iscsi_auth_group_remove_secret", 00:29:05.795 "iscsi_auth_group_add_secret", 00:29:05.795 "iscsi_delete_auth_group", 00:29:05.795 "iscsi_create_auth_group", 00:29:05.795 "iscsi_set_discovery_auth", 00:29:05.795 "iscsi_get_options", 00:29:05.795 "iscsi_target_node_request_logout", 00:29:05.795 "iscsi_target_node_set_redirect", 00:29:05.795 "iscsi_target_node_set_auth", 00:29:05.795 "iscsi_target_node_add_lun", 00:29:05.795 "iscsi_get_stats", 00:29:05.795 "iscsi_get_connections", 00:29:05.795 "iscsi_portal_group_set_auth", 00:29:05.795 "iscsi_start_portal_group", 00:29:05.795 "iscsi_delete_portal_group", 00:29:05.795 "iscsi_create_portal_group", 00:29:05.795 "iscsi_get_portal_groups", 00:29:05.795 "iscsi_delete_target_node", 00:29:05.795 "iscsi_target_node_remove_pg_ig_maps", 00:29:05.795 "iscsi_target_node_add_pg_ig_maps", 00:29:05.795 "iscsi_create_target_node", 00:29:05.795 "iscsi_get_target_nodes", 00:29:05.795 "iscsi_delete_initiator_group", 00:29:05.795 "iscsi_initiator_group_remove_initiators", 00:29:05.795 "iscsi_initiator_group_add_initiators", 00:29:05.795 "iscsi_create_initiator_group", 00:29:05.795 "iscsi_get_initiator_groups", 00:29:05.795 "nvmf_set_crdt", 00:29:05.795 "nvmf_set_config", 00:29:05.795 "nvmf_set_max_subsystems", 00:29:05.795 "nvmf_stop_mdns_prr", 00:29:05.795 "nvmf_publish_mdns_prr", 00:29:05.795 "nvmf_subsystem_get_listeners", 00:29:05.795 "nvmf_subsystem_get_qpairs", 00:29:05.795 "nvmf_subsystem_get_controllers", 00:29:05.795 "nvmf_get_stats", 00:29:05.795 
"nvmf_get_transports", 00:29:05.795 "nvmf_create_transport", 00:29:05.795 "nvmf_get_targets", 00:29:05.795 "nvmf_delete_target", 00:29:05.795 "nvmf_create_target", 00:29:05.795 "nvmf_subsystem_allow_any_host", 00:29:05.795 "nvmf_subsystem_remove_host", 00:29:05.795 "nvmf_subsystem_add_host", 00:29:05.795 "nvmf_ns_remove_host", 00:29:05.795 "nvmf_ns_add_host", 00:29:05.795 "nvmf_subsystem_remove_ns", 00:29:05.795 "nvmf_subsystem_add_ns", 00:29:05.795 "nvmf_subsystem_listener_set_ana_state", 00:29:05.795 "nvmf_discovery_get_referrals", 00:29:05.795 "nvmf_discovery_remove_referral", 00:29:05.795 "nvmf_discovery_add_referral", 00:29:05.795 "nvmf_subsystem_remove_listener", 00:29:05.795 "nvmf_subsystem_add_listener", 00:29:05.795 "nvmf_delete_subsystem", 00:29:05.795 "nvmf_create_subsystem", 00:29:05.795 "nvmf_get_subsystems", 00:29:05.795 "env_dpdk_get_mem_stats", 00:29:05.795 "nbd_get_disks", 00:29:05.795 "nbd_stop_disk", 00:29:05.795 "nbd_start_disk", 00:29:05.795 "ublk_recover_disk", 00:29:05.795 "ublk_get_disks", 00:29:05.795 "ublk_stop_disk", 00:29:05.795 "ublk_start_disk", 00:29:05.795 "ublk_destroy_target", 00:29:05.795 "ublk_create_target", 00:29:05.795 "virtio_blk_create_transport", 00:29:05.795 "virtio_blk_get_transports", 00:29:05.795 "vhost_controller_set_coalescing", 00:29:05.795 "vhost_get_controllers", 00:29:05.795 "vhost_delete_controller", 00:29:05.795 "vhost_create_blk_controller", 00:29:05.795 "vhost_scsi_controller_remove_target", 00:29:05.795 "vhost_scsi_controller_add_target", 00:29:05.795 "vhost_start_scsi_controller", 00:29:05.795 "vhost_create_scsi_controller", 00:29:05.795 "thread_set_cpumask", 00:29:05.795 "framework_get_scheduler", 00:29:05.795 "framework_set_scheduler", 00:29:05.795 "framework_get_reactors", 00:29:05.795 "thread_get_io_channels", 00:29:05.795 "thread_get_pollers", 00:29:05.795 "thread_get_stats", 00:29:05.795 "framework_monitor_context_switch", 00:29:05.795 "spdk_kill_instance", 00:29:05.795 "log_enable_timestamps", 00:29:05.795 "log_get_flags", 00:29:05.795 "log_clear_flag", 00:29:05.795 "log_set_flag", 00:29:05.795 "log_get_level", 00:29:05.795 "log_set_level", 00:29:05.795 "log_get_print_level", 00:29:05.795 "log_set_print_level", 00:29:05.795 "framework_enable_cpumask_locks", 00:29:05.795 "framework_disable_cpumask_locks", 00:29:05.795 "framework_wait_init", 00:29:05.795 "framework_start_init", 00:29:05.795 "scsi_get_devices", 00:29:05.795 "bdev_get_histogram", 00:29:05.795 "bdev_enable_histogram", 00:29:05.795 "bdev_set_qos_limit", 00:29:05.795 "bdev_set_qd_sampling_period", 00:29:05.795 "bdev_get_bdevs", 00:29:05.795 "bdev_reset_iostat", 00:29:05.795 "bdev_get_iostat", 00:29:05.795 "bdev_examine", 00:29:05.795 "bdev_wait_for_examine", 00:29:05.795 "bdev_set_options", 00:29:05.795 "notify_get_notifications", 00:29:05.795 "notify_get_types", 00:29:05.795 "accel_get_stats", 00:29:05.795 "accel_set_options", 00:29:05.795 "accel_set_driver", 00:29:05.795 "accel_crypto_key_destroy", 00:29:05.795 "accel_crypto_keys_get", 00:29:05.795 "accel_crypto_key_create", 00:29:05.795 "accel_assign_opc", 00:29:05.795 "accel_get_module_info", 00:29:05.795 "accel_get_opc_assignments", 00:29:05.795 "vmd_rescan", 00:29:05.795 "vmd_remove_device", 00:29:05.795 "vmd_enable", 00:29:05.795 "sock_get_default_impl", 00:29:05.795 "sock_set_default_impl", 00:29:05.795 "sock_impl_set_options", 00:29:05.795 "sock_impl_get_options", 00:29:05.795 "iobuf_get_stats", 00:29:05.795 "iobuf_set_options", 00:29:05.795 "keyring_get_keys", 00:29:05.795 "framework_get_pci_devices", 
00:29:05.795 "framework_get_config", 00:29:05.795 "framework_get_subsystems", 00:29:05.795 "vfu_tgt_set_base_path", 00:29:05.795 "trace_get_info", 00:29:05.795 "trace_get_tpoint_group_mask", 00:29:05.795 "trace_disable_tpoint_group", 00:29:05.795 "trace_enable_tpoint_group", 00:29:05.795 "trace_clear_tpoint_mask", 00:29:05.795 "trace_set_tpoint_mask", 00:29:05.795 "spdk_get_version", 00:29:05.795 "rpc_get_methods" 00:29:05.795 ] 00:29:05.795 03:25:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:05.795 03:25:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:05.795 03:25:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1978998 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1978998 ']' 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1978998 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1978998 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1978998' 00:29:05.795 killing process with pid 1978998 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1978998 00:29:05.795 03:25:47 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1978998 00:29:06.055 00:29:06.055 real 0m0.959s 00:29:06.055 user 0m1.620s 00:29:06.055 sys 0m0.389s 00:29:06.055 03:25:47 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:06.055 03:25:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.055 ************************************ 00:29:06.055 END TEST spdkcli_tcp 00:29:06.055 ************************************ 00:29:06.055 03:25:47 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:29:06.055 03:25:47 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:06.055 03:25:47 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:06.055 03:25:47 -- common/autotest_common.sh@10 -- # set +x 00:29:06.055 ************************************ 00:29:06.055 START TEST dpdk_mem_utility 00:29:06.055 ************************************ 00:29:06.055 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:29:06.314 * Looking for test storage... 
00:29:06.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:29:06.314 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:29:06.314 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1979292 00:29:06.314 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1979292 00:29:06.314 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1979292 ']' 00:29:06.314 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.314 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:06.314 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.314 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:06.314 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:06.314 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:06.314 [2024-06-11 03:25:47.587618] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:06.314 [2024-06-11 03:25:47.587660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979292 ] 00:29:06.314 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.314 [2024-06-11 03:25:47.645662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.314 [2024-06-11 03:25:47.686079] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.574 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:06.574 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:29:06.574 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:29:06.574 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:29:06.574 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:06.574 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:06.574 { 00:29:06.574 "filename": "/tmp/spdk_mem_dump.txt" 00:29:06.574 } 00:29:06.574 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:06.574 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:29:06.574 DPDK memory size 814.000000 MiB in 1 heap(s) 00:29:06.574 1 heaps totaling size 814.000000 MiB 00:29:06.574 size: 814.000000 MiB heap id: 0 00:29:06.574 end heaps---------- 00:29:06.574 8 mempools totaling size 598.116089 MiB 00:29:06.574 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:29:06.574 size: 158.602051 MiB name: PDU_data_out_Pool 00:29:06.574 size: 84.521057 MiB name: bdev_io_1979292 00:29:06.574 size: 51.011292 MiB name: evtpool_1979292 00:29:06.574 size: 50.003479 MiB name: 
msgpool_1979292 00:29:06.574 size: 21.763794 MiB name: PDU_Pool 00:29:06.574 size: 19.513306 MiB name: SCSI_TASK_Pool 00:29:06.574 size: 0.026123 MiB name: Session_Pool 00:29:06.574 end mempools------- 00:29:06.574 6 memzones totaling size 4.142822 MiB 00:29:06.574 size: 1.000366 MiB name: RG_ring_0_1979292 00:29:06.574 size: 1.000366 MiB name: RG_ring_1_1979292 00:29:06.574 size: 1.000366 MiB name: RG_ring_4_1979292 00:29:06.574 size: 1.000366 MiB name: RG_ring_5_1979292 00:29:06.574 size: 0.125366 MiB name: RG_ring_2_1979292 00:29:06.574 size: 0.015991 MiB name: RG_ring_3_1979292 00:29:06.574 end memzones------- 00:29:06.574 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:29:06.574 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:29:06.574 list of free elements. size: 12.519348 MiB 00:29:06.574 element at address: 0x200000400000 with size: 1.999512 MiB 00:29:06.574 element at address: 0x200018e00000 with size: 0.999878 MiB 00:29:06.574 element at address: 0x200019000000 with size: 0.999878 MiB 00:29:06.574 element at address: 0x200003e00000 with size: 0.996277 MiB 00:29:06.574 element at address: 0x200031c00000 with size: 0.994446 MiB 00:29:06.574 element at address: 0x200013800000 with size: 0.978699 MiB 00:29:06.574 element at address: 0x200007000000 with size: 0.959839 MiB 00:29:06.574 element at address: 0x200019200000 with size: 0.936584 MiB 00:29:06.574 element at address: 0x200000200000 with size: 0.841614 MiB 00:29:06.574 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:29:06.574 element at address: 0x20000b200000 with size: 0.490723 MiB 00:29:06.574 element at address: 0x200000800000 with size: 0.487793 MiB 00:29:06.574 element at address: 0x200019400000 with size: 0.485657 MiB 00:29:06.574 element at address: 0x200027e00000 with size: 0.410034 MiB 00:29:06.574 element at address: 0x200003a00000 with size: 0.355530 MiB 00:29:06.574 list of standard malloc elements. 
size: 199.218079 MiB 00:29:06.574 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:29:06.574 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:29:06.574 element at address: 0x200018efff80 with size: 1.000122 MiB 00:29:06.574 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:29:06.574 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:29:06.574 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:29:06.574 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:29:06.574 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:29:06.574 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:29:06.574 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:29:06.574 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:29:06.574 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200003adb300 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200003adb500 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200003affa80 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200003affb40 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:29:06.574 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:29:06.574 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:29:06.574 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:29:06.574 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:29:06.574 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:29:06.574 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200027e69040 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:29:06.574 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:29:06.574 list of memzone associated elements. 
size: 602.262573 MiB 00:29:06.574 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:29:06.574 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:29:06.574 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:29:06.574 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:29:06.574 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:29:06.574 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1979292_0 00:29:06.574 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:29:06.574 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1979292_0 00:29:06.574 element at address: 0x200003fff380 with size: 48.003052 MiB 00:29:06.574 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1979292_0 00:29:06.574 element at address: 0x2000195be940 with size: 20.255554 MiB 00:29:06.574 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:29:06.574 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:29:06.574 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:29:06.574 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:29:06.574 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1979292 00:29:06.574 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:29:06.574 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1979292 00:29:06.574 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:29:06.574 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1979292 00:29:06.574 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:29:06.574 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:29:06.574 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:29:06.575 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:29:06.575 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:29:06.575 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:29:06.575 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:29:06.575 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:29:06.575 element at address: 0x200003eff180 with size: 1.000488 MiB 00:29:06.575 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1979292 00:29:06.575 element at address: 0x200003affc00 with size: 1.000488 MiB 00:29:06.575 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1979292 00:29:06.575 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:29:06.575 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1979292 00:29:06.575 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:29:06.575 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1979292 00:29:06.575 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:29:06.575 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1979292 00:29:06.575 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:29:06.575 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:29:06.575 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:29:06.575 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:29:06.575 element at address: 0x20001947c540 with size: 0.250488 MiB 00:29:06.575 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:29:06.575 element at address: 0x200003adf880 with size: 0.125488 MiB 00:29:06.575 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1979292 00:29:06.575 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:29:06.575 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:29:06.575 element at address: 0x200027e69100 with size: 0.023743 MiB 00:29:06.575 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:29:06.575 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:29:06.575 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1979292 00:29:06.575 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:29:06.575 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:29:06.575 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:29:06.575 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1979292 00:29:06.575 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:29:06.575 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1979292 00:29:06.575 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:29:06.575 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:29:06.575 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:29:06.575 03:25:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1979292 00:29:06.575 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1979292 ']' 00:29:06.575 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1979292 00:29:06.575 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:29:06.575 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:06.575 03:25:47 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1979292 00:29:06.834 03:25:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:06.834 03:25:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:06.834 03:25:48 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1979292' 00:29:06.834 killing process with pid 1979292 00:29:06.834 03:25:48 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1979292 00:29:06.834 03:25:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1979292 00:29:07.093 00:29:07.093 real 0m0.854s 00:29:07.093 user 0m0.797s 00:29:07.093 sys 0m0.355s 00:29:07.093 03:25:48 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:07.093 03:25:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:29:07.093 ************************************ 00:29:07.093 END TEST dpdk_mem_utility 00:29:07.093 ************************************ 00:29:07.093 03:25:48 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:29:07.093 03:25:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:07.093 03:25:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:07.093 03:25:48 -- common/autotest_common.sh@10 -- # set +x 00:29:07.093 ************************************ 00:29:07.093 START TEST event 00:29:07.093 ************************************ 00:29:07.093 03:25:48 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:29:07.093 * Looking for test storage... 
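The event suite that begins here runs three small reactor benchmarks through the run_test wrapper (which prints the START/END banners seen throughout this log). The first, event_perf, runs a one-second event workload across the cores in the mask and prints a per-lcore event count; the invocation used below, for reference:

    # Four reactors (-m 0xF), one-second run (-t 1)
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # Expect one 'lcore N: <count>' line per core; near-equal counts
    # suggest events were balanced evenly across the reactors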
00:29:07.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:29:07.093 03:25:48 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:29:07.093 03:25:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:29:07.093 03:25:48 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:29:07.093 03:25:48 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:29:07.093 03:25:48 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:07.093 03:25:48 event -- common/autotest_common.sh@10 -- # set +x 00:29:07.093 ************************************ 00:29:07.093 START TEST event_perf 00:29:07.093 ************************************ 00:29:07.093 03:25:48 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:29:07.352 Running I/O for 1 seconds...[2024-06-11 03:25:48.511858] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:07.352 [2024-06-11 03:25:48.511927] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979378 ] 00:29:07.352 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.352 [2024-06-11 03:25:48.576037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.352 [2024-06-11 03:25:48.618874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.352 [2024-06-11 03:25:48.618972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.352 [2024-06-11 03:25:48.619241] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.352 [2024-06-11 03:25:48.619244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.289 Running I/O for 1 seconds... 00:29:08.289 lcore 0: 214727 00:29:08.289 lcore 1: 214727 00:29:08.289 lcore 2: 214725 00:29:08.289 lcore 3: 214726 00:29:08.289 done. 00:29:08.289 00:29:08.289 real 0m1.189s 00:29:08.289 user 0m4.100s 00:29:08.289 sys 0m0.086s 00:29:08.289 03:25:49 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:08.289 03:25:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:29:08.289 ************************************ 00:29:08.289 END TEST event_perf 00:29:08.289 ************************************ 00:29:08.548 03:25:49 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:29:08.548 03:25:49 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:29:08.548 03:25:49 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:08.548 03:25:49 event -- common/autotest_common.sh@10 -- # set +x 00:29:08.548 ************************************ 00:29:08.548 START TEST event_reactor 00:29:08.548 ************************************ 00:29:08.548 03:25:49 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:29:08.548 [2024-06-11 03:25:49.765898] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
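event_reactor, starting here, exercises timed events on a single reactor: test_start/test_end bracket the run, 'oneshot' fires once, and the tick lines appear to correspond to timers with relative periods 100, 250 and 500. Invocation, for reference:

    # Single reactor (default cpumask 0x1), one-second run
    ./test/event/reactor/reactor -t 1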
00:29:08.548 [2024-06-11 03:25:49.765969] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979615 ] 00:29:08.548 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.548 [2024-06-11 03:25:49.829884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.548 [2024-06-11 03:25:49.869490] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.925 test_start 00:29:09.925 oneshot 00:29:09.925 tick 100 00:29:09.925 tick 100 00:29:09.925 tick 250 00:29:09.925 tick 100 00:29:09.925 tick 100 00:29:09.925 tick 250 00:29:09.925 tick 100 00:29:09.925 tick 500 00:29:09.925 tick 100 00:29:09.925 tick 100 00:29:09.925 tick 250 00:29:09.925 tick 100 00:29:09.925 tick 100 00:29:09.925 test_end 00:29:09.925 00:29:09.925 real 0m1.187s 00:29:09.925 user 0m1.100s 00:29:09.925 sys 0m0.082s 00:29:09.925 03:25:50 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:09.925 03:25:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:29:09.925 ************************************ 00:29:09.925 END TEST event_reactor 00:29:09.925 ************************************ 00:29:09.925 03:25:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:29:09.925 03:25:50 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:29:09.925 03:25:50 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:09.925 03:25:50 event -- common/autotest_common.sh@10 -- # set +x 00:29:09.925 ************************************ 00:29:09.925 START TEST event_reactor_perf 00:29:09.925 ************************************ 00:29:09.925 03:25:50 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:29:09.926 [2024-06-11 03:25:51.018214] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
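event_reactor_perf, next, measures raw event throughput on one reactor and prints a single 'Performance: N events per second' figure (about 510k events/s on this node, as seen below). Invocation, for reference:

    # One-second throughput run on a single reactor
    ./test/event/reactor_perf/reactor_perf -t 1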
00:29:09.926 [2024-06-11 03:25:51.018284] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1979861 ] 00:29:09.926 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.926 [2024-06-11 03:25:51.081980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.926 [2024-06-11 03:25:51.119396] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.862 test_start 00:29:10.862 test_end 00:29:10.862 Performance: 510175 events per second 00:29:10.862 00:29:10.862 real 0m1.182s 00:29:10.862 user 0m1.090s 00:29:10.862 sys 0m0.088s 00:29:10.862 03:25:52 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:10.862 03:25:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:29:10.862 ************************************ 00:29:10.862 END TEST event_reactor_perf 00:29:10.862 ************************************ 00:29:10.862 03:25:52 event -- event/event.sh@49 -- # uname -s 00:29:10.862 03:25:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:29:10.862 03:25:52 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:29:10.862 03:25:52 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:10.862 03:25:52 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:10.862 03:25:52 event -- common/autotest_common.sh@10 -- # set +x 00:29:10.862 ************************************ 00:29:10.862 START TEST event_scheduler 00:29:10.862 ************************************ 00:29:10.862 03:25:52 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:29:11.121 * Looking for test storage... 00:29:11.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:29:11.121 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:29:11.121 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1980141 00:29:11.121 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:29:11.121 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:29:11.121 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1980141 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1980141 ']' 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:11.121 [2024-06-11 03:25:52.387224] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:11.121 [2024-06-11 03:25:52.387266] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980141 ] 00:29:11.121 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.121 [2024-06-11 03:25:52.441512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.121 [2024-06-11 03:25:52.484452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.121 [2024-06-11 03:25:52.484539] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.121 [2024-06-11 03:25:52.484644] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.121 [2024-06-11 03:25:52.484645] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:29:11.121 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.121 03:25:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 POWER: Env isn't set yet! 00:29:11.380 POWER: Attempting to initialise ACPI cpufreq power management... 00:29:11.380 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:29:11.380 POWER: Cannot set governor of lcore 0 to userspace 00:29:11.380 POWER: Attempting to initialise PSTAT power management... 
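The POWER lines here are SPDK's EAL power-management init: the ACPI cpufreq attempt fails because the userspace governor cannot be set, the PSTAT driver then succeeds, and each lcore's governor is switched to 'performance' (and restored to 'powersave' when the scheduler app exits, as seen at the end of this test). The underlying sysfs mechanics, as an illustration using the standard Linux cpufreq paths:

    # Read the current governor for CPU 0
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

    # What the power library effectively does per lcore (requires root)
    echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor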
00:29:11.380 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:29:11.380 POWER: Initialized successfully for lcore 0 power management 00:29:11.380 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:29:11.380 POWER: Initialized successfully for lcore 1 power management 00:29:11.380 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:29:11.380 POWER: Initialized successfully for lcore 2 power management 00:29:11.380 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:29:11.380 POWER: Initialized successfully for lcore 3 power management 00:29:11.380 [2024-06-11 03:25:52.566240] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:29:11.380 [2024-06-11 03:25:52.566255] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:29:11.380 [2024-06-11 03:25:52.566267] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:29:11.380 03:25:52 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:29:11.380 03:25:52 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 [2024-06-11 03:25:52.629628] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:29:11.380 03:25:52 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:29:11.380 03:25:52 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:11.380 03:25:52 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 ************************************ 00:29:11.380 START TEST scheduler_create_thread 00:29:11.380 ************************************ 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 2 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 3 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 4 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 5 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 6 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 7 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 8 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.380 9 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.380 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.381 10 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.381 03:25:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:12.316 03:25:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:12.316 03:25:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:29:12.316 03:25:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:12.316 03:25:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:13.691 03:25:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.691 03:25:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:29:13.691 03:25:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:29:13.691 03:25:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.691 03:25:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:15.067 03:25:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:15.067 00:29:15.067 real 0m3.381s 00:29:15.067 user 0m0.020s 00:29:15.067 sys 0m0.008s 00:29:15.067 03:25:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:15.067 03:25:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:29:15.067 ************************************ 00:29:15.067 END TEST scheduler_create_thread 00:29:15.067 ************************************ 00:29:15.067 03:25:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:29:15.067 03:25:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1980141 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1980141 ']' 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1980141 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
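The scheduler_thread_create/set_active/delete calls above come from the test's rpc.py plugin (scheduler_plugin); rpc_cmd expands to rpc.py with that plugin loaded. A condensed sketch of the same calls, assuming the scheduler app is listening on the default socket and the plugin directory is on PYTHONPATH:

    # Busy thread pinned to core 0: -n name, -m cpumask, -a active load (%)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100

    # Idle pinned thread on the same core
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

    # Change thread 11's active load to 50%, then delete thread 12
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12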
00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1980141 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1980141' 00:29:15.067 killing process with pid 1980141 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1980141 00:29:15.067 03:25:56 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1980141 00:29:15.067 [2024-06-11 03:25:56.425550] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:29:15.326 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:29:15.326 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:29:15.326 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:29:15.326 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:29:15.326 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:29:15.326 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:29:15.326 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:29:15.326 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:29:15.326 00:29:15.326 real 0m4.393s 00:29:15.326 user 0m7.769s 00:29:15.326 sys 0m0.322s 00:29:15.326 03:25:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:15.327 03:25:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:29:15.327 ************************************ 00:29:15.327 END TEST event_scheduler 00:29:15.327 ************************************ 00:29:15.327 03:25:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:29:15.327 03:25:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:29:15.327 03:25:56 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:15.327 03:25:56 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:15.327 03:25:56 event -- common/autotest_common.sh@10 -- # set +x 00:29:15.327 ************************************ 00:29:15.327 START TEST app_repeat 00:29:15.327 ************************************ 00:29:15.327 03:25:56 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1980896 00:29:15.327 03:25:56 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1980896' 00:29:15.327 Process app_repeat pid: 1980896 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:29:15.327 spdk_app_start Round 0 00:29:15.327 03:25:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1980896 /var/tmp/spdk-nbd.sock 00:29:15.327 03:25:56 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1980896 ']' 00:29:15.327 03:25:56 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:15.327 03:25:56 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:15.327 03:25:56 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:15.327 03:25:56 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:15.327 03:25:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:29:15.586 [2024-06-11 03:25:56.754581] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:15.586 [2024-06-11 03:25:56.754648] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980896 ] 00:29:15.586 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.586 [2024-06-11 03:25:56.818417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.586 [2024-06-11 03:25:56.858458] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.586 [2024-06-11 03:25:56.858460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.586 03:25:56 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:15.586 03:25:56 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:29:15.586 03:25:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:15.845 Malloc0 00:29:15.845 03:25:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:16.104 Malloc1 00:29:16.104 03:25:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:16.104 03:25:57 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:29:16.104 /dev/nbd0 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:16.104 1+0 records in 00:29:16.104 1+0 records out 00:29:16.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192707 s, 21.3 MB/s 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:29:16.104 03:25:57 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:16.104 03:25:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:29:16.363 /dev/nbd1 00:29:16.363 03:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:16.363 03:25:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:29:16.363 03:25:57 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:16.363 1+0 records in 00:29:16.363 1+0 records out 00:29:16.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210856 s, 19.4 MB/s 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:29:16.363 03:25:57 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:29:16.363 03:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:16.363 03:25:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:16.363 03:25:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:16.363 03:25:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.363 03:25:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:16.622 { 00:29:16.622 "nbd_device": "/dev/nbd0", 00:29:16.622 "bdev_name": "Malloc0" 00:29:16.622 }, 00:29:16.622 { 00:29:16.622 "nbd_device": "/dev/nbd1", 00:29:16.622 "bdev_name": "Malloc1" 00:29:16.622 } 00:29:16.622 ]' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:16.622 { 00:29:16.622 "nbd_device": "/dev/nbd0", 00:29:16.622 "bdev_name": "Malloc0" 00:29:16.622 }, 00:29:16.622 { 00:29:16.622 "nbd_device": "/dev/nbd1", 00:29:16.622 "bdev_name": "Malloc1" 00:29:16.622 } 00:29:16.622 ]' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:16.622 /dev/nbd1' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:16.622 /dev/nbd1' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:29:16.622 03:25:57 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:29:16.622 256+0 records in 00:29:16.622 256+0 records out 00:29:16.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103219 s, 102 MB/s 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:16.622 256+0 records in 00:29:16.622 256+0 records out 00:29:16.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130464 s, 80.4 MB/s 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:16.622 256+0 records in 00:29:16.622 256+0 records out 00:29:16.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139368 s, 75.2 MB/s 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:16.622 03:25:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:16.880 03:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:16.880 03:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:16.880 03:25:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:16.880 03:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:16.881 03:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:16.881 03:25:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:16.881 03:25:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:29:16.881 03:25:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:29:16.881 03:25:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:16.881 03:25:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:17.139 03:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:17.139 03:25:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:17.139 03:25:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:17.139 03:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:17.139 03:25:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:17.139 03:25:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:17.139 03:25:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:29:17.140 03:25:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:29:17.140 03:25:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:17.140 03:25:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:17.140 03:25:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:17.399 03:25:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:29:17.399 03:25:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:29:17.399 03:25:58 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:29:17.657 [2024-06-11 03:25:58.943567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:17.657 [2024-06-11 03:25:58.980346] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.657 [2024-06-11 03:25:58.980349] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.657 [2024-06-11 03:25:59.020902] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:29:17.657 [2024-06-11 03:25:59.020942] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:29:20.946 03:26:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:29:20.946 03:26:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:29:20.946 spdk_app_start Round 1 00:29:20.946 03:26:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1980896 /var/tmp/spdk-nbd.sock 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1980896 ']' 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:20.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:20.946 03:26:01 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:29:20.946 03:26:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:20.946 Malloc0 00:29:20.946 03:26:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:20.946 Malloc1 00:29:20.946 03:26:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
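The Round 0 → `spdk_kill_instance SIGTERM` → `sleep 3` → Round 1 sequence in the trace above is the app_repeat driver loop. Reconstructed from the `event/event.sh` trace markers (@18–@35), it has roughly the following shape; this is a hedged sketch, not the verbatim script: `SPDK_DIR` and `SOCK` stand in for the workspace paths in the log, and `waitforlisten`, `killprocess`, and `nbd_rpc_data_verify` are the helper names the trace attributes to the common test scripts.

```bash
# Hedged reconstruction of the app_repeat loop traced above (event/event.sh).
SOCK=/var/tmp/spdk-nbd.sock

"$SPDK_DIR/test/event/app_repeat/app_repeat" -r "$SOCK" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$SOCK"                     # RPC socket is up
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 64 4096  # Malloc0
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 64 4096  # Malloc1
    nbd_rpc_data_verify "$SOCK" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    # SIGTERM makes the app restart, which begins the next round in the log
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" spdk_kill_instance SIGTERM
    sleep 3
done
```

The malloc bdevs must be recreated every round because the SIGTERM tears down the whole target, which is exactly why each round in the log re-runs the same `bdev_malloc_create` and NBD setup.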
00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:20.946 03:26:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:29:21.205 /dev/nbd0 00:29:21.205 03:26:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:21.205 03:26:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:21.205 1+0 records in 00:29:21.205 1+0 records out 00:29:21.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222196 s, 18.4 MB/s 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:29:21.205 03:26:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:29:21.205 03:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:21.205 03:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.205 03:26:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:29:21.469 /dev/nbd1 00:29:21.469 03:26:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:21.469 03:26:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
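The `grep -q -w nbd0 /proc/partitions` / `dd ... iflag=direct` pairs traced above are the `waitfornbd` helper from `autotest_common.sh` (markers @867–@888). A minimal sketch of what the trace shows, with `/tmp/nbdtest` standing in for the workspace scratch file and the sleep interval assumed (only the retry bounds appear in the trace):

```bash
# Hedged sketch of waitfornbd: poll /proc/partitions until the nbd device
# appears, then prove it is readable by pulling one 4 KiB block via O_DIRECT.
waitfornbd() {
    local nbd_name=$1
    local i tmp=/tmp/nbdtest     # the trace uses spdk/test/event/nbdtest

    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break                # device node is registered
        fi
        sleep 0.1                # assumed back-off between retries
    done

    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$tmp")    # trace: size=4096
        rm -f "$tmp"
        if [ "$size" != 0 ]; then
            return 0             # a full block came back: the device is live
        fi
    done
    return 1
}
```

The O_DIRECT read matters here: it forces the I/O through the NBD device rather than the page cache, so a successful 4096-byte read really means the SPDK-backed device is serving requests.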
00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:21.469 1+0 records in 00:29:21.469 1+0 records out 00:29:21.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210489 s, 19.5 MB/s 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:29:21.469 03:26:02 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:29:21.469 03:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:21.469 03:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:21.470 03:26:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:21.470 03:26:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.470 03:26:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:21.470 03:26:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:21.470 { 00:29:21.470 "nbd_device": "/dev/nbd0", 00:29:21.470 "bdev_name": "Malloc0" 00:29:21.470 }, 00:29:21.470 { 00:29:21.470 "nbd_device": "/dev/nbd1", 00:29:21.470 "bdev_name": "Malloc1" 00:29:21.470 } 00:29:21.470 ]' 00:29:21.470 03:26:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:21.470 { 00:29:21.470 "nbd_device": "/dev/nbd0", 00:29:21.470 "bdev_name": "Malloc0" 00:29:21.470 }, 00:29:21.470 { 00:29:21.470 "nbd_device": "/dev/nbd1", 00:29:21.470 "bdev_name": "Malloc1" 00:29:21.470 } 00:29:21.470 ]' 00:29:21.470 03:26:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:21.731 /dev/nbd1' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:21.731 /dev/nbd1' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:29:21.731 256+0 records in 00:29:21.731 256+0 records out 00:29:21.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101715 s, 103 MB/s 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:21.731 256+0 records in 00:29:21.731 256+0 records out 00:29:21.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145463 s, 72.1 MB/s 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:21.731 256+0 records in 00:29:21.731 256+0 records out 00:29:21.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144509 s, 72.6 MB/s 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:21.731 03:26:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:21.990 03:26:03 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:21.990 03:26:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:21.991 03:26:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:22.250 03:26:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:29:22.250 03:26:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:29:22.509 03:26:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:29:22.768 [2024-06-11 03:26:03.946656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:22.768 [2024-06-11 03:26:03.983415] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.768 [2024-06-11 03:26:03.983416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.768 [2024-06-11 03:26:04.024718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
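Each round's 1 MiB write-then-compare pass (the `dd if=/dev/urandom`, per-device `dd ... oflag=direct`, and `cmp -b -n 1M` lines above) follows the `nbd_dd_data_verify` pattern traced from `nbd_common.sh` (@70–@85). A hedged sketch, with `/tmp/nbdrandtest` in place of the workspace path:

```bash
# Hedged sketch of nbd_dd_data_verify: in 'write' mode, generate one random
# 1 MiB pattern and copy it to every NBD device; in 'verify' mode, byte-compare
# each device against the same file, then discard the file.
nbd_dd_data_verify() {
    local nbd_list=($1)          # e.g. '/dev/nbd0 /dev/nbd1'
    local operation=$2           # 'write' or 'verify'
    local tmp_file=/tmp/nbdrandtest   # trace uses spdk/test/event/nbdrandtest

    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256      # 1 MiB pattern
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"    # any mismatch fails the test
        done
        rm "$tmp_file"
    fi
}
```

Because both devices are written from the same file and compared against it afterwards, a single corrupted block anywhere in the malloc-bdev → NBD path shows up as a `cmp` failure in the log.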
00:29:22.768 [2024-06-11 03:26:04.024758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:29:26.058 03:26:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:29:26.058 03:26:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:29:26.058 spdk_app_start Round 2 00:29:26.058 03:26:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1980896 /var/tmp/spdk-nbd.sock 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1980896 ']' 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:26.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:26.058 03:26:06 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:29:26.058 03:26:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:26.058 Malloc0 00:29:26.058 03:26:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:29:26.058 Malloc1 00:29:26.058 03:26:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:26.058 03:26:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:29:26.317 /dev/nbd0 00:29:26.317 
03:26:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:26.317 03:26:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:26.317 1+0 records in 00:29:26.317 1+0 records out 00:29:26.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184471 s, 22.2 MB/s 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:29:26.317 03:26:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:26.317 03:26:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:26.317 03:26:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:29:26.317 /dev/nbd1 00:29:26.317 03:26:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:26.317 03:26:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:26.317 03:26:07 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:29:26.577 1+0 records in 00:29:26.577 1+0 records out 00:29:26.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228182 s, 18.0 MB/s 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:29:26.577 03:26:07 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:26.577 { 00:29:26.577 "nbd_device": "/dev/nbd0", 00:29:26.577 "bdev_name": "Malloc0" 00:29:26.577 }, 00:29:26.577 { 00:29:26.577 "nbd_device": "/dev/nbd1", 00:29:26.577 "bdev_name": "Malloc1" 00:29:26.577 } 00:29:26.577 ]' 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:26.577 { 00:29:26.577 "nbd_device": "/dev/nbd0", 00:29:26.577 "bdev_name": "Malloc0" 00:29:26.577 }, 00:29:26.577 { 00:29:26.577 "nbd_device": "/dev/nbd1", 00:29:26.577 "bdev_name": "Malloc1" 00:29:26.577 } 00:29:26.577 ]' 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:26.577 /dev/nbd1' 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:26.577 /dev/nbd1' 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:26.577 03:26:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:29:26.837 256+0 records in 00:29:26.837 256+0 records out 00:29:26.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103282 s, 102 MB/s 00:29:26.837 03:26:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:26.837 03:26:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:26.837 256+0 records in 00:29:26.837 256+0 records out 00:29:26.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135028 s, 77.7 MB/s 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:26.837 256+0 records in 00:29:26.837 256+0 records out 00:29:26.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141143 s, 74.3 MB/s 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
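The `nbd_stop_disk` calls above are each followed by a `waitfornbd_exit` poll (markers @35–@45): the same `/proc/partitions` probe as at attach time, but now waiting for the name to disappear. A hedged sketch, with the break condition inferred from the trace order and the sleep interval assumed:

```bash
# Hedged sketch of waitfornbd_exit: after nbd_stop_disk, poll /proc/partitions
# until the device name is gone, i.e. the kernel has detached the NBD device.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break                # detached; the trace logs this as 'break'
        fi
        sleep 0.1                # assumed; only the retry bounds are traced
    done
    return 0
}
```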
00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:26.837 03:26:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:27.097 03:26:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:27.356 03:26:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:29:27.356 03:26:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:29:27.615 03:26:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:29:27.615 [2024-06-11 03:26:08.991539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:27.874 [2024-06-11 03:26:09.028174] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.874 [2024-06-11 03:26:09.028175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.874 [2024-06-11 03:26:09.068965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:29:27.874 [2024-06-11 03:26:09.069017] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
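Putting the pieces together, the per-round wrapper traced three times above (`nbd_common.sh@90–@105`) attaches the bdevs over NBD, sanity-checks the reported disk count, runs the write/verify pass, then detaches and confirms the count drops back to zero. A hedged sketch, where `rpc.py` stands in for the full `scripts/rpc.py` path shown in the trace:

```bash
# Hedged sketch of nbd_rpc_data_verify, the round's end-to-end data check.
nbd_rpc_data_verify() {
    local rpc_server=$1 bdev_str=$2 nbd_str=$3
    local nbd_list=($3)

    nbd_start_disks "$rpc_server" "$bdev_str" "$nbd_str"
    local count
    count=$(nbd_get_count "$rpc_server")
    if [ "$count" -ne "${#nbd_list[@]}" ]; then
        return 1                 # trace: "'[' 2 -ne 2 ']'" is the pass branch
    fi

    nbd_dd_data_verify "$nbd_str" write
    nbd_dd_data_verify "$nbd_str" verify

    nbd_stop_disks "$rpc_server" "$nbd_str"
    count=$(nbd_get_count "$rpc_server")
    [ "$count" -eq 0 ]           # trace: "'[' 0 -ne 0 ']'" then return 0
}

# nbd_get_count as traced: list the disks over RPC, pull the device paths out
# of the JSON with jq, and count the /dev/nbd entries.
nbd_get_count() {
    local rpc_server=$1
    rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device' |
        grep -c /dev/nbd || true   # grep -c still prints 0 on no matches
}
```

The `|| true` mirrors the `-- # true` step visible in the trace once the disks are stopped: `grep -c` exits non-zero on an empty list even though it prints the count `0`.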
00:29:31.164 03:26:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1980896 /var/tmp/spdk-nbd.sock 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1980896 ']' 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:31.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:31.164 03:26:11 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:29:31.164 03:26:12 event.app_repeat -- event/event.sh@39 -- # killprocess 1980896 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1980896 ']' 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1980896 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1980896 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1980896' 00:29:31.164 killing process with pid 1980896 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1980896 00:29:31.164 03:26:12 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1980896 00:29:31.164 spdk_app_start is called in Round 0. 00:29:31.164 Shutdown signal received, stop current app iteration 00:29:31.164 Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 reinitialization... 00:29:31.164 spdk_app_start is called in Round 1. 00:29:31.164 Shutdown signal received, stop current app iteration 00:29:31.164 Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 reinitialization... 00:29:31.164 spdk_app_start is called in Round 2. 00:29:31.164 Shutdown signal received, stop current app iteration 00:29:31.164 Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 reinitialization... 00:29:31.164 spdk_app_start is called in Round 3. 
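The final teardown above runs the `killprocess` helper (`autotest_common.sh@949–@973`): verify the pid, check what kind of process it is, then SIGTERM it and reap it. A hedged sketch of the steps the trace actually shows; the `sudo` branch is elided because this run takes the `reactor_0` path:

```bash
# Hedged sketch of killprocess as traced: confirm the pid is alive and is an
# SPDK reactor, announce the kill, send SIGTERM, and wait for it to exit.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid"                                    # still running?
    local process_name
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here
    fi
    if [ "$process_name" = sudo ]; then
        :   # the real helper retargets the child pid; not exercised in this log
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"      # returns once the reactors have shut down cleanly
}
```

The "spdk_app_start is called in Round 0..3" messages that follow are emitted by app_repeat itself on shutdown, confirming all four iterations (three loop rounds plus the final instance) ran.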
00:29:31.164 Shutdown signal received, stop current app iteration 00:29:31.164 03:26:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:29:31.164 03:26:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:29:31.164 00:29:31.164 real 0m15.477s 00:29:31.164 user 0m33.607s 00:29:31.164 sys 0m2.287s 00:29:31.165 03:26:12 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:31.165 03:26:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:29:31.165 ************************************ 00:29:31.165 END TEST app_repeat 00:29:31.165 ************************************ 00:29:31.165 03:26:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:29:31.165 03:26:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:29:31.165 03:26:12 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:31.165 03:26:12 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:31.165 03:26:12 event -- common/autotest_common.sh@10 -- # set +x 00:29:31.165 ************************************ 00:29:31.165 START TEST cpu_locks 00:29:31.165 ************************************ 00:29:31.165 03:26:12 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:29:31.165 * Looking for test storage... 00:29:31.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:29:31.165 03:26:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:29:31.165 03:26:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:29:31.165 03:26:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:29:31.165 03:26:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:29:31.165 03:26:12 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:31.165 03:26:12 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:31.165 03:26:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:31.165 ************************************ 00:29:31.165 START TEST default_locks 00:29:31.165 ************************************ 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1983856 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1983856 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1983856 ']' 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
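The `default_locks` test below starts a bare `spdk_tgt -m 0x1` and then asserts that the target holds a CPU-core file lock. The check traced at `cpu_locks.sh@22` reduces to an `lslocks` grep; a minimal sketch:

```bash
# Hedged sketch of the locks_exist check traced below (cpu_locks.sh@22): an
# SPDK target that claimed a core should hold a file lock whose name contains
# spdk_cpu_lock, visible in lslocks output for that pid.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# Usage mirroring the trace: spdk_tgt was started with -m 0x1, so exactly the
# core-0 lock is expected.
# locks_exist "$spdk_tgt_pid"
```

The stray `lslocks: write error` line in the log is `lslocks` complaining about its output pipe closing once `grep -q` has matched; the grep's exit status, not the message, is what the test consumes.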
00:29:31.165 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:31.165 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:29:31.165 [2024-06-11 03:26:12.428441] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:31.165 [2024-06-11 03:26:12.428480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983856 ] 00:29:31.165 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.165 [2024-06-11 03:26:12.488675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.165 [2024-06-11 03:26:12.528037] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.424 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:31.424 03:26:12 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:29:31.424 03:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1983856 00:29:31.424 03:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1983856 00:29:31.424 03:26:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:31.992 lslocks: write error 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1983856 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 1983856 ']' 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 1983856 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1983856 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1983856' 00:29:31.992 killing process with pid 1983856 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 1983856 00:29:31.992 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 1983856 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1983856 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1983856 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1983856 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1983856 ']' 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:29:32.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1983856) - No such process 00:29:32.252 ERROR: process (pid: 1983856) is no longer running 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:29:32.252 00:29:32.252 real 0m1.149s 00:29:32.252 user 0m1.074s 00:29:32.252 sys 0m0.546s 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:32.252 03:26:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:29:32.252 ************************************ 00:29:32.252 END TEST default_locks 00:29:32.252 ************************************ 00:29:32.252 03:26:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:29:32.252 03:26:13 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:32.252 03:26:13 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:32.252 03:26:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:32.252 ************************************ 00:29:32.252 START TEST default_locks_via_rpc 00:29:32.252 ************************************ 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1984120 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1984120 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:29:32.252 03:26:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1984120 ']' 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:32.252 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:32.252 [2024-06-11 03:26:13.642774] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:32.252 [2024-06-11 03:26:13.642811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984120 ] 00:29:32.532 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.532 [2024-06-11 03:26:13.700563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.532 [2024-06-11 03:26:13.741464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.532 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:29:32.827 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:29:32.828 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.828 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:32.828 03:26:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.828 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1984120 00:29:32.828 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1984120 00:29:32.828 03:26:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1984120 00:29:33.113 03:26:14 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 1984120 ']' 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 1984120 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1984120 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1984120' 00:29:33.113 killing process with pid 1984120 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 1984120 00:29:33.113 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 1984120 00:29:33.372 00:29:33.372 real 0m0.979s 00:29:33.372 user 0m0.914s 00:29:33.372 sys 0m0.467s 00:29:33.372 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:33.372 03:26:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:33.372 ************************************ 00:29:33.372 END TEST default_locks_via_rpc 00:29:33.372 ************************************ 00:29:33.372 03:26:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:29:33.372 03:26:14 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:33.372 03:26:14 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:33.372 03:26:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:33.372 ************************************ 00:29:33.372 START TEST non_locking_app_on_locked_coremask 00:29:33.372 ************************************ 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1984190 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1984190 /var/tmp/spdk.sock 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1984190 ']' 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
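[editor's note] The default_locks_via_rpc test that just finished drives the per-core lock files over JSON-RPC rather than by process lifetime: the trace shows framework_disable_cpumask_locks, a no_locks check that finds an empty lock_files glob, then framework_enable_cpumask_locks. A minimal sketch of that sequence, assuming SPDK's stock scripts/rpc.py client (the rpc.py path is an assumption; socket path and method names are verbatim from the trace):

# Release the per-core lock files over JSON-RPC, confirm none remain,
# then retake them -- mirroring the no_locks/locks_exist checks above.
./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no core locks held'
./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks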
00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:33.372 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:33.372 [2024-06-11 03:26:14.687690] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:33.372 [2024-06-11 03:26:14.687729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984190 ] 00:29:33.372 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.372 [2024-06-11 03:26:14.746864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.632 [2024-06-11 03:26:14.788102] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1984381 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1984381 /var/tmp/spdk2.sock 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1984381 ']' 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:33.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:33.632 03:26:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:33.632 [2024-06-11 03:26:15.018234] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:33.632 [2024-06-11 03:26:15.018282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984381 ] 00:29:33.890 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.891 [2024-06-11 03:26:15.096470] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
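[editor's note] At this point the harness has started a second spdk_tgt on the same core mask but with --disable-cpumask-locks, which is why the "CPU core locks deactivated" notice appears instead of a lock conflict. A hedged sketch of that setup, with the binary path as an assumed stand-in and sleeps replacing the real waitforlisten polling:

SPDK_BIN=./build/bin/spdk_tgt                 # assumed path to the built target
$SPDK_BIN -m 0x1 &                            # first instance claims core 0
pid1=$!
sleep 1                                       # the real harness polls the RPC socket instead
$SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                       # second instance shares core 0 without locking
sleep 1
kill "$pid1" "$pid2"; wait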
00:29:33.891 [2024-06-11 03:26:15.096491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.891 [2024-06-11 03:26:15.176024] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.458 03:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:34.458 03:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:29:34.458 03:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1984190 00:29:34.458 03:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1984190 00:29:34.458 03:26:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:34.717 lslocks: write error 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1984190 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1984190 ']' 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1984190 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1984190 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1984190' 00:29:34.717 killing process with pid 1984190 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1984190 00:29:34.717 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1984190 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1984381 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1984381 ']' 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1984381 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1984381 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1984381' 00:29:35.653 
killing process with pid 1984381 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1984381 00:29:35.653 03:26:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1984381 00:29:35.653 00:29:35.653 real 0m2.407s 00:29:35.653 user 0m2.486s 00:29:35.653 sys 0m0.789s 00:29:35.653 03:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:35.653 03:26:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:35.653 ************************************ 00:29:35.653 END TEST non_locking_app_on_locked_coremask 00:29:35.653 ************************************ 00:29:35.912 03:26:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:29:35.912 03:26:17 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:35.912 03:26:17 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:35.912 03:26:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:35.912 ************************************ 00:29:35.912 START TEST locking_app_on_unlocked_coremask 00:29:35.912 ************************************ 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1984657 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1984657 /var/tmp/spdk.sock 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1984657 ']' 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:35.912 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:35.912 [2024-06-11 03:26:17.163144] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:35.912 [2024-06-11 03:26:17.163181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984657 ] 00:29:35.912 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.912 [2024-06-11 03:26:17.219895] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
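[editor's note] Every locks_exist call in this trace boils down to asking lslocks which file locks a PID holds and grepping for the spdk_cpu_lock prefix; the stray "lslocks: write error" lines are most likely lslocks hitting a closed pipe once grep -q exits on its first match, not a test failure. A sketch of the check, with the helper name and both commands taken from the trace (the standalone wrapper is illustrative):

locks_exist() {
    local pid=$1
    # list file locks held by $pid; grep -q returns as soon as one
    # /var/tmp/spdk_cpu_lock_* entry shows up, which can EPIPE lslocks
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
locks_exist 12345 && echo 'core lock held' || echo 'no core lock'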
00:29:35.912 [2024-06-11 03:26:17.219917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.912 [2024-06-11 03:26:17.260525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1984718 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1984718 /var/tmp/spdk2.sock 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1984718 ']' 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:36.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:36.172 03:26:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:36.172 [2024-06-11 03:26:17.488892] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:29:36.172 [2024-06-11 03:26:17.488939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984718 ] 00:29:36.172 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.172 [2024-06-11 03:26:17.568907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.431 [2024-06-11 03:26:17.654235] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.999 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:36.999 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:29:36.999 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1984718 00:29:36.999 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1984718 00:29:36.999 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:37.568 lslocks: write error 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1984657 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1984657 ']' 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1984657 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1984657 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1984657' 00:29:37.568 killing process with pid 1984657 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1984657 00:29:37.568 03:26:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1984657 00:29:38.134 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1984718 00:29:38.134 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1984718 ']' 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1984718 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1984718 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
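[editor's note] The killprocess steps interleaved through this trace follow one pattern: confirm the PID is alive with kill -0, read its command name with ps (reactor_0 for a healthy SPDK app), then kill and reap it. A hedged reconstruction inferred from the xtrace output above, not copied from autotest_common.sh:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK reactor
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid"                               # reap it so later lock checks aren't skewed
}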
00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1984718' 00:29:38.135 killing process with pid 1984718 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1984718 00:29:38.135 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1984718 00:29:38.394 00:29:38.394 real 0m2.585s 00:29:38.394 user 0m2.671s 00:29:38.394 sys 0m0.860s 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:38.394 ************************************ 00:29:38.394 END TEST locking_app_on_unlocked_coremask 00:29:38.394 ************************************ 00:29:38.394 03:26:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:29:38.394 03:26:19 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:38.394 03:26:19 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:38.394 03:26:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:38.394 ************************************ 00:29:38.394 START TEST locking_app_on_locked_coremask 00:29:38.394 ************************************ 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1985158 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1985158 /var/tmp/spdk.sock 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1985158 ']' 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:38.394 03:26:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:38.653 [2024-06-11 03:26:19.814184] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:29:38.654 [2024-06-11 03:26:19.814228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985158 ] 00:29:38.654 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.654 [2024-06-11 03:26:19.872699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.654 [2024-06-11 03:26:19.908988] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1985167 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1985167 /var/tmp/spdk2.sock 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1985167 /var/tmp/spdk2.sock 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1985167 /var/tmp/spdk2.sock 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1985167 ']' 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:38.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:38.913 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:38.913 [2024-06-11 03:26:20.149227] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:29:38.913 [2024-06-11 03:26:20.149277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985167 ] 00:29:38.913 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.913 [2024-06-11 03:26:20.234599] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1985158 has claimed it. 00:29:38.913 [2024-06-11 03:26:20.234637] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:29:39.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1985167) - No such process 00:29:39.482 ERROR: process (pid: 1985167) is no longer running 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1985158 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1985158 00:29:39.482 03:26:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:29:40.050 lslocks: write error 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1985158 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1985158 ']' 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1985158 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1985158 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1985158' 00:29:40.050 killing process with pid 1985158 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1985158 00:29:40.050 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1985158 00:29:40.309 00:29:40.309 real 0m1.749s 00:29:40.309 user 0m1.837s 00:29:40.309 sys 0m0.622s 00:29:40.309 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:29:40.309 03:26:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:40.309 ************************************ 00:29:40.309 END TEST locking_app_on_locked_coremask 00:29:40.309 ************************************ 00:29:40.309 03:26:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:29:40.309 03:26:21 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:40.309 03:26:21 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:40.309 03:26:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:40.309 ************************************ 00:29:40.309 START TEST locking_overlapped_coremask 00:29:40.309 ************************************ 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1985498 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1985498 /var/tmp/spdk.sock 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1985498 ']' 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:40.309 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:40.309 [2024-06-11 03:26:21.627949] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:29:40.309 [2024-06-11 03:26:21.628003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985498 ] 00:29:40.309 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.309 [2024-06-11 03:26:21.688884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:40.568 [2024-06-11 03:26:21.729904] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.568 [2024-06-11 03:26:21.730008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.568 [2024-06-11 03:26:21.730016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.568 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:40.568 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:29:40.568 03:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1985646 00:29:40.568 03:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1985646 /var/tmp/spdk2.sock 00:29:40.568 03:26:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1985646 /var/tmp/spdk2.sock 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1985646 /var/tmp/spdk2.sock 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1985646 ']' 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:40.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:40.569 03:26:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:40.569 [2024-06-11 03:26:21.961651] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:29:40.569 [2024-06-11 03:26:21.961702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985646 ] 00:29:40.828 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.828 [2024-06-11 03:26:22.042831] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1985498 has claimed it. 00:29:40.828 [2024-06-11 03:26:22.042869] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:29:41.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1985646) - No such process 00:29:41.396 ERROR: process (pid: 1985646) is no longer running 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1985498 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 1985498 ']' 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 1985498 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1985498 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1985498' 00:29:41.396 killing process with pid 1985498 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
1985498 00:29:41.396 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 1985498 00:29:41.654 00:29:41.654 real 0m1.355s 00:29:41.654 user 0m3.670s 00:29:41.654 sys 0m0.384s 00:29:41.654 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:41.654 03:26:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:29:41.654 ************************************ 00:29:41.654 END TEST locking_overlapped_coremask 00:29:41.654 ************************************ 00:29:41.654 03:26:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:29:41.654 03:26:22 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:41.654 03:26:22 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:41.654 03:26:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:41.654 ************************************ 00:29:41.654 START TEST locking_overlapped_coremask_via_rpc 00:29:41.654 ************************************ 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1985808 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1985808 /var/tmp/spdk.sock 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1985808 ']' 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:41.655 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:41.655 [2024-06-11 03:26:23.051341] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:41.655 [2024-06-11 03:26:23.051381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985808 ] 00:29:41.913 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.913 [2024-06-11 03:26:23.113705] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
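[editor's note] The check_remaining_locks comparison that closed the previous test expands both a glob and a brace range and expects them to match: a 0x7 mask (cores 0-2) must leave exactly lock files 000 through 002 behind. The two array expansions below are verbatim from the trace; the surrounding echo logic is illustrative:

locks=(/var/tmp/spdk_cpu_lock_*)                      # what actually exists
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # what a 0x7 mask should create
[[ ${locks[*]} == "${locks_expected[*]}" ]] \
    && echo 'lock files match cores 0-2' \
    || echo "unexpected lock files: ${locks[*]}"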
00:29:41.913 [2024-06-11 03:26:23.113731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:41.913 [2024-06-11 03:26:23.156119] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.913 [2024-06-11 03:26:23.156217] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.913 [2024-06-11 03:26:23.156218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1985910 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1985910 /var/tmp/spdk2.sock 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1985910 ']' 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:42.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:42.172 03:26:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:42.172 [2024-06-11 03:26:23.385763] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:42.172 [2024-06-11 03:26:23.385815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1985910 ] 00:29:42.172 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.172 [2024-06-11 03:26:23.470054] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
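[editor's note] The two targets in this test deliberately overlap: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is the one both would lock. A one-liner makes the overlap explicit (explanatory only, not part of the harness):

printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2 -- the core the
                                                #    RPC error below complains about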
00:29:42.172 [2024-06-11 03:26:23.470083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.172 [2024-06-11 03:26:23.550877] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.172 [2024-06-11 03:26:23.550990] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.172 [2024-06-11 03:26:23.550991] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:43.109 [2024-06-11 03:26:24.190082] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1985808 has claimed it. 
00:29:43.109 request: 00:29:43.109 { 00:29:43.109 "method": "framework_enable_cpumask_locks", 00:29:43.109 "req_id": 1 00:29:43.109 } 00:29:43.109 Got JSON-RPC error response 00:29:43.109 response: 00:29:43.109 { 00:29:43.109 "code": -32603, 00:29:43.109 "message": "Failed to claim CPU core: 2" 00:29:43.109 } 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1985808 /var/tmp/spdk.sock 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1985808 ']' 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:43.109 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:29:43.110 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1985910 /var/tmp/spdk2.sock 00:29:43.110 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1985910 ']' 00:29:43.110 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:29:43.110 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:43.110 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:29:43.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
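[editor's note] The JSON-RPC exchange above is the heart of this test: with another process already holding core 2, framework_enable_cpumask_locks must fail with -32603 rather than silently proceed. Assuming SPDK's stock rpc.py client, the same call can be issued by hand against the socket named in the trace:

# Reproduces the 'Failed to claim CPU core: 2' error while the first
# target (pid 1985808 in this run) still holds its overlapping mask.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks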
00:29:43.110 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:43.110 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:43.368 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:43.368 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:29:43.368 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:29:43.368 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:29:43.368 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:29:43.368 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:29:43.368 00:29:43.368 real 0m1.589s 00:29:43.368 user 0m0.712s 00:29:43.368 sys 0m0.153s 00:29:43.368 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:43.369 03:26:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:29:43.369 ************************************ 00:29:43.369 END TEST locking_overlapped_coremask_via_rpc 00:29:43.369 ************************************ 00:29:43.369 03:26:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:29:43.369 03:26:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1985808 ]] 00:29:43.369 03:26:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1985808 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1985808 ']' 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1985808 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1985808 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1985808' 00:29:43.369 killing process with pid 1985808 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1985808 00:29:43.369 03:26:24 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1985808 00:29:43.630 03:26:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1985910 ]] 00:29:43.630 03:26:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1985910 00:29:43.630 03:26:24 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1985910 ']' 00:29:43.630 03:26:24 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1985910 00:29:43.630 03:26:24 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:29:43.630 03:26:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:29:43.630 03:26:24 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1985910 00:29:43.630 03:26:25 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:29:43.630 03:26:25 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:29:43.630 03:26:25 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1985910' 00:29:43.630 killing process with pid 1985910 00:29:43.630 03:26:25 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1985910 00:29:43.630 03:26:25 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1985910 00:29:44.198 03:26:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:29:44.198 03:26:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:29:44.198 03:26:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1985808 ]] 00:29:44.198 03:26:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1985808 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1985808 ']' 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1985808 00:29:44.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1985808) - No such process 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1985808 is not found' 00:29:44.198 Process with pid 1985808 is not found 00:29:44.198 03:26:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1985910 ]] 00:29:44.198 03:26:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1985910 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1985910 ']' 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1985910 00:29:44.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1985910) - No such process 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1985910 is not found' 00:29:44.198 Process with pid 1985910 is not found 00:29:44.198 03:26:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:29:44.198 00:29:44.198 real 0m13.090s 00:29:44.198 user 0m22.483s 00:29:44.198 sys 0m4.749s 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:44.198 03:26:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:29:44.198 ************************************ 00:29:44.198 END TEST cpu_locks 00:29:44.198 ************************************ 00:29:44.198 00:29:44.198 real 0m37.005s 00:29:44.198 user 1m10.333s 00:29:44.198 sys 0m7.955s 00:29:44.198 03:26:25 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:44.198 03:26:25 event -- common/autotest_common.sh@10 -- # set +x 00:29:44.198 ************************************ 00:29:44.198 END TEST event 00:29:44.198 ************************************ 00:29:44.198 03:26:25 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:29:44.198 03:26:25 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:44.199 03:26:25 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:44.199 03:26:25 -- common/autotest_common.sh@10 -- # set +x 00:29:44.199 ************************************ 00:29:44.199 START TEST thread 00:29:44.199 ************************************ 00:29:44.199 03:26:25 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:29:44.199 * Looking for test storage... 00:29:44.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:29:44.199 03:26:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:29:44.199 03:26:25 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:29:44.199 03:26:25 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:44.199 03:26:25 thread -- common/autotest_common.sh@10 -- # set +x 00:29:44.199 ************************************ 00:29:44.199 START TEST thread_poller_perf 00:29:44.199 ************************************ 00:29:44.199 03:26:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:29:44.199 [2024-06-11 03:26:25.581074] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:44.199 [2024-06-11 03:26:25.581143] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1986275 ] 00:29:44.458 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.458 [2024-06-11 03:26:25.645630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.458 [2024-06-11 03:26:25.687018] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.458 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:29:45.392 ====================================== 00:29:45.393 busy:2106103600 (cyc) 00:29:45.393 total_run_count: 420000 00:29:45.393 tsc_hz: 2100000000 (cyc) 00:29:45.393 ====================================== 00:29:45.393 poller_cost: 5014 (cyc), 2387 (nsec) 00:29:45.393 00:29:45.393 real 0m1.192s 00:29:45.393 user 0m1.103s 00:29:45.393 sys 0m0.082s 00:29:45.393 03:26:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:45.393 03:26:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:29:45.393 ************************************ 00:29:45.393 END TEST thread_poller_perf 00:29:45.393 ************************************ 00:29:45.393 03:26:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:45.393 03:26:26 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:29:45.393 03:26:26 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:45.393 03:26:26 thread -- common/autotest_common.sh@10 -- # set +x 00:29:45.651 ************************************ 00:29:45.651 START TEST thread_poller_perf 00:29:45.651 ************************************ 00:29:45.651 03:26:26 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:29:45.651 [2024-06-11 03:26:26.822330] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:29:45.651 [2024-06-11 03:26:26.822397] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1986503 ] 00:29:45.651 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.651 [2024-06-11 03:26:26.885199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.651 [2024-06-11 03:26:26.923505] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.651 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:29:46.589 ====================================== 00:29:46.589 busy:2101383038 (cyc) 00:29:46.589 total_run_count: 5589000 00:29:46.589 tsc_hz: 2100000000 (cyc) 00:29:46.589 ====================================== 00:29:46.589 poller_cost: 375 (cyc), 178 (nsec) 00:29:46.589 00:29:46.589 real 0m1.177s 00:29:46.589 user 0m1.094s 00:29:46.589 sys 0m0.080s 00:29:46.589 03:26:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:46.589 03:26:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:29:46.589 ************************************ 00:29:46.589 END TEST thread_poller_perf 00:29:46.589 ************************************ 00:29:46.847 03:26:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:29:46.847 00:29:46.847 real 0m2.569s 00:29:46.847 user 0m2.281s 00:29:46.847 sys 0m0.294s 00:29:46.847 03:26:28 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:46.847 03:26:28 thread -- common/autotest_common.sh@10 -- # set +x 00:29:46.847 ************************************ 00:29:46.847 END TEST thread 00:29:46.847 ************************************ 00:29:46.847 03:26:28 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:29:46.847 03:26:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:29:46.847 03:26:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:46.847 03:26:28 -- common/autotest_common.sh@10 -- # set +x 00:29:46.847 ************************************ 00:29:46.847 START TEST accel 00:29:46.847 ************************************ 00:29:46.847 03:26:28 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:29:46.847 * Looking for test storage... 
00:29:46.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:29:46.847 03:26:28 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:29:46.847 03:26:28 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:29:46.847 03:26:28 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:29:46.847 03:26:28 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1986790 00:29:46.847 03:26:28 accel -- accel/accel.sh@63 -- # waitforlisten 1986790 00:29:46.847 03:26:28 accel -- common/autotest_common.sh@830 -- # '[' -z 1986790 ']' 00:29:46.847 03:26:28 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:29:46.847 03:26:28 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.847 03:26:28 accel -- accel/accel.sh@61 -- # build_accel_config 00:29:46.847 03:26:28 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:29:46.847 03:26:28 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.847 03:26:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:46.847 03:26:28 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:29:46.847 03:26:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:46.847 03:26:28 accel -- common/autotest_common.sh@10 -- # set +x 00:29:46.847 03:26:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:46.847 03:26:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:46.847 03:26:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:46.847 03:26:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:29:46.847 03:26:28 accel -- accel/accel.sh@41 -- # jq -r . 00:29:46.847 [2024-06-11 03:26:28.212493] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:46.847 [2024-06-11 03:26:28.212540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1986790 ] 00:29:46.847 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.106 [2024-06-11 03:26:28.271995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.106 [2024-06-11 03:26:28.314663] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.106 03:26:28 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:29:47.106 03:26:28 accel -- common/autotest_common.sh@863 -- # return 0 00:29:47.106 03:26:28 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:29:47.106 03:26:28 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:29:47.106 03:26:28 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:29:47.106 03:26:28 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:29:47.106 03:26:28 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:29:47.106 03:26:28 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:29:47.106 03:26:28 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:29:47.106 03:26:28 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:47.106 03:26:28 accel -- common/autotest_common.sh@10 -- # set +x 00:29:47.106 03:26:28 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 
03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # IFS== 00:29:47.365 03:26:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:29:47.365 03:26:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:29:47.365 03:26:28 accel -- accel/accel.sh@75 -- # killprocess 1986790 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@949 -- # '[' -z 1986790 ']' 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@953 -- # kill -0 1986790 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@954 -- # uname 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1986790 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1986790' 00:29:47.365 killing process with pid 1986790 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@968 -- # kill 1986790 00:29:47.365 03:26:28 accel -- common/autotest_common.sh@973 -- # wait 1986790 00:29:47.624 03:26:28 accel -- accel/accel.sh@76 -- # trap - ERR 00:29:47.624 03:26:28 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:29:47.624 03:26:28 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:29:47.624 03:26:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:47.624 03:26:28 accel -- common/autotest_common.sh@10 -- # set +x 00:29:47.624 03:26:28 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:29:47.624 03:26:28 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:29:47.624 03:26:28 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:47.624 03:26:28 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:29:47.624 03:26:28 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:29:47.624 03:26:28 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:29:47.624 03:26:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:47.624 03:26:28 accel -- common/autotest_common.sh@10 -- # set +x 00:29:47.624 ************************************ 00:29:47.624 START TEST accel_missing_filename 00:29:47.624 ************************************ 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:47.624 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:29:47.624 03:26:29 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:29:47.883 [2024-06-11 03:26:29.037144] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:47.883 [2024-06-11 03:26:29.037210] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987048 ] 00:29:47.883 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.883 [2024-06-11 03:26:29.096867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.883 [2024-06-11 03:26:29.135804] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.883 [2024-06-11 03:26:29.176404] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:47.883 [2024-06-11 03:26:29.236200] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:29:48.143 A filename is required. 
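"A filename is required." is accel_perf rejecting a compress workload started without an input file, which is exactly what this test asserts through the NOT wrapper. A minimal sketch of the failing call next to a corrected one; the bib input path is borrowed from the compress_verify test below, and whether the corrected call then completes depends on the build's compress support:

    # Fails: per the usage text, compress/decompress workloads need
    # -l <name of uncompressed input file>.
    ./build/examples/accel_perf -t 1 -w compress
    # Passes the filename check:
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib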
00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:48.143 00:29:48.143 real 0m0.288s 00:29:48.143 user 0m0.201s 00:29:48.143 sys 0m0.124s 00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:48.143 03:26:29 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:29:48.143 ************************************ 00:29:48.143 END TEST accel_missing_filename 00:29:48.143 ************************************ 00:29:48.143 03:26:29 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:29:48.143 03:26:29 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:29:48.143 03:26:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:48.143 03:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:29:48.143 ************************************ 00:29:48.143 START TEST accel_compress_verify 00:29:48.143 ************************************ 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:48.143 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:48.143 
03:26:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:29:48.143 03:26:29 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:29:48.143 [2024-06-11 03:26:29.390054] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:48.143 [2024-06-11 03:26:29.390103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987075 ] 00:29:48.143 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.143 [2024-06-11 03:26:29.451646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.143 [2024-06-11 03:26:29.491920] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.143 [2024-06-11 03:26:29.533087] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:48.402 [2024-06-11 03:26:29.593196] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:29:48.402 00:29:48.402 Compression does not support the verify option, aborting. 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:48.402 00:29:48.402 real 0m0.295s 00:29:48.402 user 0m0.208s 00:29:48.402 sys 0m0.124s 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:48.402 03:26:29 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:29:48.402 ************************************ 00:29:48.402 END TEST accel_compress_verify 00:29:48.402 ************************************ 00:29:48.402 03:26:29 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:29:48.402 03:26:29 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:29:48.402 03:26:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:48.402 03:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:29:48.402 ************************************ 00:29:48.402 START TEST accel_wrong_workload 00:29:48.402 ************************************ 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:48.402 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:29:48.403 03:26:29 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:29:48.403 Unsupported workload type: foobar 00:29:48.403 [2024-06-11 03:26:29.751890] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:29:48.403 accel_perf options: 00:29:48.403 [-h help message] 00:29:48.403 [-q queue depth per core] 00:29:48.403 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:29:48.403 [-T number of threads per core 00:29:48.403 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:29:48.403 [-t time in seconds] 00:29:48.403 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:29:48.403 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:29:48.403 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:29:48.403 [-l for compress/decompress workloads, name of uncompressed input file 00:29:48.403 [-S for crc32c workload, use this seed value (default 0) 00:29:48.403 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:29:48.403 [-f for fill workload, use this BYTE value (default 255) 00:29:48.403 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:29:48.403 [-y verify result if this switch is on] 00:29:48.403 [-a tasks to allocate per core (default: same value as -q)] 00:29:48.403 Can be used to spread operations across a wider range of memory. 
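The usage dump above is accel_perf rejecting "-w foobar": the workload must be one of the opcodes listed under "-w workload type must be one of these:". A minimal valid counterpart, reusing the flags the accel_crc32c test below exercises (-t run time in seconds, -S crc32c seed value, -y verify the result):

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y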
00:29:48.403 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:29:48.403 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:48.403 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:48.403 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:48.403 00:29:48.403 real 0m0.031s 00:29:48.403 user 0m0.018s 00:29:48.403 sys 0m0.014s 00:29:48.403 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:48.403 03:26:29 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:29:48.403 ************************************ 00:29:48.403 END TEST accel_wrong_workload 00:29:48.403 ************************************ 00:29:48.403 Error: writing output failed: Broken pipe 00:29:48.403 03:26:29 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:29:48.403 03:26:29 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:29:48.403 03:26:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:48.403 03:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:29:48.689 ************************************ 00:29:48.689 START TEST accel_negative_buffers 00:29:48.689 ************************************ 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:48.689 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:29:48.689 03:26:29 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:29:48.689 -x option must be non-negative. 
00:29:48.689 [2024-06-11 03:26:29.845702] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:29:48.689 accel_perf options: 00:29:48.689 [-h help message] 00:29:48.689 [-q queue depth per core] 00:29:48.689 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:29:48.689 [-T number of threads per core 00:29:48.689 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:29:48.689 [-t time in seconds] 00:29:48.689 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:29:48.689 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:29:48.689 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:29:48.689 [-l for compress/decompress workloads, name of uncompressed input file 00:29:48.689 [-S for crc32c workload, use this seed value (default 0) 00:29:48.689 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:29:48.689 [-f for fill workload, use this BYTE value (default 255) 00:29:48.689 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:29:48.689 [-y verify result if this switch is on] 00:29:48.689 [-a tasks to allocate per core (default: same value as -q)] 00:29:48.690 Can be used to spread operations across a wider range of memory. 00:29:48.690 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:29:48.690 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:48.690 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:48.690 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:48.690 00:29:48.690 real 0m0.029s 00:29:48.690 user 0m0.037s 00:29:48.690 sys 0m0.016s 00:29:48.690 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:48.690 03:26:29 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:29:48.690 ************************************ 00:29:48.690 END TEST accel_negative_buffers 00:29:48.690 ************************************ 00:29:48.690 03:26:29 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:29:48.690 03:26:29 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:29:48.690 03:26:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:48.690 03:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:29:48.690 ************************************ 00:29:48.690 START TEST accel_crc32c 00:29:48.690 ************************************ 00:29:48.690 03:26:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:29:48.690 03:26:29 
accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:29:48.690 03:26:29 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:29:48.690 [2024-06-11 03:26:29.927509] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:48.690 [2024-06-11 03:26:29.927573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987353 ] 00:29:48.690 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.690 [2024-06-11 03:26:29.987167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.690 [2024-06-11 03:26:30.027914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.690 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:48.949 03:26:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:49.885 03:26:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:49.885 03:26:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:49.885 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:49.885 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:49.886 
03:26:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:29:49.886 03:26:31 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:49.886 00:29:49.886 real 0m1.297s 00:29:49.886 user 0m1.180s 00:29:49.886 sys 0m0.122s 00:29:49.886 03:26:31 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:49.886 03:26:31 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 ************************************ 00:29:49.886 END TEST accel_crc32c 00:29:49.886 ************************************ 00:29:49.886 03:26:31 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:29:49.886 03:26:31 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:29:49.886 03:26:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:49.886 03:26:31 accel -- common/autotest_common.sh@10 -- # set +x 00:29:49.886 ************************************ 00:29:49.886 START TEST accel_crc32c_C2 00:29:49.886 ************************************ 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:29:49.886 
03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:29:49.886 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:29:49.886 [2024-06-11 03:26:31.276945] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:49.886 [2024-06-11 03:26:31.276995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987602 ] 00:29:50.144 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.145 [2024-06-11 03:26:31.337207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.145 [2024-06-11 03:26:31.376303] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:50.145 03:26:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:51.523 00:29:51.523 real 0m1.287s 00:29:51.523 user 0m1.173s 00:29:51.523 sys 0m0.120s 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:51.523 03:26:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:29:51.524 ************************************ 00:29:51.524 END TEST accel_crc32c_C2 00:29:51.524 ************************************ 00:29:51.524 03:26:32 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:29:51.524 03:26:32 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:29:51.524 03:26:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:51.524 03:26:32 accel -- common/autotest_common.sh@10 -- # set +x 00:29:51.524 ************************************ 00:29:51.524 START TEST accel_copy 00:29:51.524 ************************************ 00:29:51.524 03:26:32 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:29:51.524 [2024-06-11 03:26:32.626307] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:51.524 [2024-06-11 03:26:32.626372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987838 ] 00:29:51.524 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.524 [2024-06-11 03:26:32.684824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.524 [2024-06-11 03:26:32.723908] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 
03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:51.524 03:26:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:29:52.904 03:26:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:52.904 00:29:52.904 real 0m1.291s 00:29:52.904 user 0m1.180s 00:29:52.904 sys 0m0.117s 00:29:52.904 03:26:33 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:52.904 03:26:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:29:52.904 ************************************ 00:29:52.904 END TEST accel_copy 00:29:52.904 ************************************ 00:29:52.904 03:26:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:52.904 03:26:33 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:29:52.904 03:26:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:52.904 03:26:33 accel -- common/autotest_common.sh@10 -- # set +x 00:29:52.904 ************************************ 00:29:52.904 START TEST accel_fill 00:29:52.904 ************************************ 00:29:52.904 03:26:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 
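Note: accel_fill is the first run above that overrides the defaults: run_test passes accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y, and the values surface in the trace as val=0x80 (128, the fill byte) and the 64/64 pair where the earlier runs show the 32/32 defaults. Assuming the binary accepts the same flags standalone (the harness additionally feeds it a JSON config as -c /dev/fd/62), the equivalent manual run would be:

    # Hypothetical manual re-run with the flags recorded in this log;
    # the path is the one printed by the trace for this workspace.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y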
00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:29:52.904 03:26:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:29:52.904 [2024-06-11 03:26:33.978371] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:52.904 [2024-06-11 03:26:33.978438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988078 ] 00:29:52.905 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.905 [2024-06-11 03:26:34.038143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.905 [2024-06-11 03:26:34.077166] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:52.905 03:26:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:53.843 03:26:35 
accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:29:53.843 03:26:35 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:53.843 00:29:53.843 real 0m1.292s 00:29:53.843 user 0m1.181s 00:29:53.843 sys 0m0.118s 00:29:53.843 03:26:35 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:53.843 03:26:35 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:29:53.843 ************************************ 00:29:53.843 END TEST accel_fill 00:29:53.843 ************************************ 00:29:54.103 03:26:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:29:54.103 03:26:35 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:29:54.103 03:26:35 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:54.103 03:26:35 accel -- common/autotest_common.sh@10 -- # set +x 00:29:54.103 ************************************ 00:29:54.103 START TEST accel_copy_crc32c 00:29:54.103 ************************************ 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
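Note: the accel.sh@31-41 entries that close each setup are build_accel_config. It starts an empty accel_json_cfg=() array; the three [[ 0 -gt 0 ]] guards and [[ -n '' ]] show that no module or driver overrides were requested in these runs, so nothing is appended; the fragments are then joined with IFS=, and normalized through jq -r . into the config accel_perf reads from -c /dev/fd/62. A sketch of that flow, assuming a hypothetical top-level "fragments" wrapper (the real JSON layout is SPDK's own):

    # Sketch of the build_accel_config flow traced at accel.sh@31-41.
    build_accel_config_sketch() {
        local -a accel_json_cfg=()
        # the @32-36 guards would append JSON fragments here when overrides are set
        local IFS=,
        jq -r . <<<"{\"fragments\":[${accel_json_cfg[*]}]}"
    }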
00:29:54.103 [2024-06-11 03:26:35.327300] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:54.103 [2024-06-11 03:26:35.327354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988299 ] 00:29:54.103 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.103 [2024-06-11 03:26:35.389116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.103 [2024-06-11 03:26:35.428260] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:54.103 03:26:35 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:54.103 03:26:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:29:55.481 03:26:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:55.481 00:29:55.481 real 0m1.294s 00:29:55.481 user 0m1.181s 00:29:55.482 sys 0m0.119s 00:29:55.482 03:26:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:55.482 03:26:36 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:29:55.482 ************************************ 00:29:55.482 END TEST accel_copy_crc32c 00:29:55.482 ************************************ 00:29:55.482 03:26:36 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:29:55.482 03:26:36 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:29:55.482 03:26:36 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:55.482 03:26:36 accel -- common/autotest_common.sh@10 -- # set +x 00:29:55.482 ************************************ 00:29:55.482 START TEST accel_copy_crc32c_C2 00:29:55.482 ************************************ 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:29:55.482 [2024-06-11 03:26:36.679206] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:29:55.482 [2024-06-11 03:26:36.679260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988521 ] 00:29:55.482 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.482 [2024-06-11 03:26:36.739701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.482 [2024-06-11 03:26:36.778737] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:29:55.482 03:26:36 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:55.482 03:26:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:56.858 00:29:56.858 real 0m1.293s 00:29:56.858 user 0m1.180s 00:29:56.858 sys 0m0.120s 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:56.858 03:26:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:29:56.858 ************************************ 00:29:56.858 END TEST accel_copy_crc32c_C2 00:29:56.858 ************************************ 00:29:56.858 03:26:37 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:29:56.858 03:26:37 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:29:56.858 03:26:37 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:56.858 03:26:37 accel -- common/autotest_common.sh@10 -- # set +x 00:29:56.858 ************************************ 00:29:56.858 START TEST accel_dualcast 00:29:56.858 ************************************ 00:29:56.858 03:26:38 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:29:56.858 [2024-06-11 03:26:38.015981] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
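Note: every run above ends with the same three accel.sh@27 assertions before set +x silences the trace: a module was selected, an opcode was recorded, and the software path was the one exercised. The \s\o\f\t\w\a\r\e form is just how xtrace escapes the pattern side of ==. Written out with the variables the trace has already expanded (names taken from the earlier accel_module=/accel_opc= assignments):

    # The post-run checks traced at accel.sh@27:
    [[ -n $accel_module ]]           # a module was picked up
    [[ -n $accel_opc ]]              # an opcode was recorded
    [[ $accel_module == software ]]  # this run used the software engine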
00:29:56.858 [2024-06-11 03:26:38.016023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988758 ] 00:29:56.858 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.858 [2024-06-11 03:26:38.073546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.858 [2024-06-11 03:26:38.113166] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.858 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 
03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:56.859 03:26:38 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:58.237 03:26:39 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:58.237 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:29:58.238 03:26:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:58.238 00:29:58.238 real 0m1.278s 00:29:58.238 user 0m1.170s 00:29:58.238 sys 0m0.115s 00:29:58.238 03:26:39 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:29:58.238 03:26:39 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:29:58.238 ************************************ 00:29:58.238 END TEST accel_dualcast 00:29:58.238 ************************************ 00:29:58.238 03:26:39 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:29:58.238 03:26:39 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:29:58.238 03:26:39 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:29:58.238 03:26:39 accel -- common/autotest_common.sh@10 -- # set +x 00:29:58.238 ************************************ 00:29:58.238 START TEST accel_compare 00:29:58.238 ************************************ 00:29:58.238 03:26:39 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:29:58.238 [2024-06-11 03:26:39.374586] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
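Note: each accel_perf start logs its DPDK EAL parameters, as in the line that follows: -c 0x1 is a single-core mask (hence "Total cores available: 1" and the reactor on core 0), --file-prefix=spdk_pid<pid> keeps concurrent runs from colliding on hugepage files, and --huge-unlink/--no-shconf/--no-telemetry keep the short-lived process self-contained. The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice only means NUMA node 1 has no 2048 kB pages reserved; the runs proceed on node 0. One way to confirm per-node availability (standard sysfs paths, not SPDK-specific):

    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages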
00:29:58.238 [2024-06-11 03:26:39.374586] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:29:58.238 [2024-06-11 03:26:39.374632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989003 ]
00:29:58.238 EAL: No free 2048 kB hugepages reported on node 1
00:29:58.238 [2024-06-11 03:26:39.436365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:58.238 [2024-06-11 03:26:39.476168] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
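The "EAL: No free 2048 kB hugepages reported on node 1" line repeats at every accel_perf start in this section. It is informational on this rig, which appears to have its 2 MB hugepage pool populated only on NUMA node 0. A quick way to confirm that on a Linux host (a generic sysfs check, not SPDK-specific):

    # Per-NUMA-node 2 MB hugepage pool sizes; a zero for node1
    # is what produces the EAL notice above.
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages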
00:29:58.238 03:26:39 accel.accel_compare -- accel/accel.sh@19-23 -- # option loop (condensed): val=0x1, val=compare (accel_opc=compare), val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes, plus empty terminator vals
00:29:59.616 03:26:40 accel.accel_compare -- accel/accel.sh@19-21 -- # option loop (condensed): empty vals drained after the run
00:29:59.616 03:26:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:29:59.616 03:26:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:29:59.616 03:26:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:59.616 real 0m1.295s
00:29:59.616 user 0m1.190s
00:29:59.616 sys 0m0.111s
00:29:59.616 03:26:40 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable
00:29:59.616 03:26:40 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:29:59.616 ************************************
00:29:59.616 END TEST accel_compare
00:29:59.616 ************************************
00:29:59.616 03:26:40 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:29:59.616 03:26:40 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']'
00:29:59.616 03:26:40 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:29:59.616 ************************************
00:29:59.616 START TEST accel_xor
00:29:59.616 ************************************
00:29:59.617 03:26:40 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y
00:29:59.617 03:26:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:29:59.617 03:26:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
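The START/END banners and the real/user/sys triplets around each workload come from the run_test wrapper in common/autotest_common.sh (its @1100/@1106/@1125 frames appear throughout the trace): it prints an opening banner, times the test body, and prints a closing banner. A rough sketch of that pattern, illustrative only and simplified relative to the upstream implementation (argument checks and xtrace handling omitted):

    # Illustrative banner/timing wrapper in the spirit of run_test.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # e.g. accel_test -t 1 -w xor -y
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test demo_sleep sleep 1    # prints banners and timings around a 1 s command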
00:29:59.617 03:26:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:29:59.617 [2024-06-11 03:26:40.725096] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:29:59.617 [2024-06-11 03:26:40.725160] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989247 ]
00:29:59.617 EAL: No free 2048 kB hugepages reported on node 1
00:29:59.617 [2024-06-11 03:26:40.784295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:59.617 [2024-06-11 03:26:40.823810] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:29:59.617 03:26:40 accel.accel_xor -- accel/accel.sh@19-23 -- # option loop (condensed): val=0x1, val=xor (accel_opc=xor), val=2, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes, plus empty terminator vals
00:30:00.995 03:26:41 accel.accel_xor -- accel/accel.sh@19-21 -- # option loop (condensed): empty vals drained after the run
00:30:00.995 03:26:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:30:00.995 03:26:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:30:00.995 03:26:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:00.995 real 0m1.292s
00:30:00.995 user 0m1.178s
00:30:00.995 sys 0m0.121s
00:30:00.995 03:26:41 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:00.995 03:26:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:30:00.995 ************************************
00:30:00.995 END TEST accel_xor
00:30:00.995 ************************************
00:30:00.995 03:26:42 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:30:00.995 03:26:42 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']'
00:30:00.995 03:26:42 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:00.995 ************************************
00:30:00.995 START TEST accel_xor
00:30:00.995 ************************************
00:30:00.995 03:26:42 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3
00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
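The two accel_xor runs differ only in the source-buffer count: the first leaves it at the default (val=2 in its trace), while the run starting here passes -x 3 (val=3, and the argument-count guard accordingly changes from '[' 7 -le 1 ']' to '[' 9 -le 1 ']'). Equivalent standalone invocations, using the binary path printed in this log; note these skip the -c /dev/fd/62 JSON config feed that accel_test sets up:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    "$PERF" -t 1 -w xor -y        # XOR with the default two source buffers
    "$PERF" -t 1 -w xor -y -x 3   # XOR across three source buffers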
00:30:00.995 [2024-06-11 03:26:42.076644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989479 ] 00:30:00.995 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.995 [2024-06-11 03:26:42.137705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.995 [2024-06-11 03:26:42.177054] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.995 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:00.996 03:26:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:02.373 
03:26:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:30:02.373 03:26:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:02.373 00:30:02.373 real 0m1.296s 00:30:02.373 user 0m1.181s 00:30:02.373 sys 0m0.121s 00:30:02.373 03:26:43 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:02.373 03:26:43 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:30:02.373 ************************************ 00:30:02.373 END TEST accel_xor 00:30:02.373 ************************************ 00:30:02.373 03:26:43 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:30:02.373 03:26:43 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:30:02.374 03:26:43 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:02.374 03:26:43 accel -- common/autotest_common.sh@10 -- # set +x 00:30:02.374 ************************************ 00:30:02.374 START TEST accel_dif_verify 00:30:02.374 ************************************ 00:30:02.374 03:26:43 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:30:02.374 [2024-06-11 03:26:43.421079] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
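Lines accel.sh@108 through @116 drive one run_test per opcode, each spawning a fresh accel_perf process, which is why the SPDK/EAL initialization repeats and a new spdk_pidNNNN file prefix appears for every test. A simplified driver in the same spirit; the real accel.sh also varies per-workload flags such as -y and -x, which are omitted here:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    # One short run per opcode covered in this section of the log.
    for w in dualcast compare xor dif_verify dif_generate dif_generate_copy; do
        "$PERF" -t 1 -w "$w"
    done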
00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:30:02.374 [2024-06-11 03:26:43.421079] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:30:02.374 [2024-06-11 03:26:43.421145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989700 ]
00:30:02.374 EAL: No free 2048 kB hugepages reported on node 1
00:30:02.374 [2024-06-11 03:26:43.480513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:02.374 [2024-06-11 03:26:43.520485] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:30:02.374 03:26:43 accel.accel_dif_verify -- accel/accel.sh@19-23 -- # option loop (condensed): val=0x1, val=dif_verify (accel_opc=dif_verify), val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=No, plus empty terminator vals
00:30:03.311 03:26:44 accel.accel_dif_verify -- accel/accel.sh@19-21 -- # option loop (condensed): empty vals drained after the run
00:30:03.311 03:26:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:30:03.311 03:26:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:30:03.311 03:26:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:03.312 real 0m1.293s
00:30:03.312 user 0m1.177s
00:30:03.312 sys 0m0.123s
00:30:03.312 03:26:44 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:03.312 03:26:44 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:30:03.312 ************************************
00:30:03.312 END TEST accel_dif_verify
00:30:03.312 ************************************
00:30:03.572 03:26:44 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:30:03.572 03:26:44 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:30:03.572 03:26:44 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:03.572 ************************************
00:30:03.572 START TEST accel_dif_generate
00:30:03.572 ************************************
00:30:03.572 03:26:44 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate
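The dif_verify and dif_generate traces both carry the buffer geometry as values: two '4096 bytes' entries, one '512 bytes' entry, and one '8 bytes' entry. Read as 512-byte blocks each protected by an 8-byte DIF tuple inside a 4096-byte transfer — my interpretation of the trace, not something the log itself states — the arithmetic works out as:

    # Back-of-envelope check of the traced geometry (assumed mapping:
    # 4096-byte transfer, 512-byte blocks, 8-byte DIF tuple per block).
    xfer=4096 blk=512 dif=8
    blocks=$((xfer / blk))                             # 8 blocks per transfer
    echo "DIF bytes per transfer: $((blocks * dif))"   # 64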
00:30:03.572 03:26:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:30:03.572 03:26:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:30:03.572 03:26:44 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:30:03.572 [2024-06-11 03:26:44.772312] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:30:03.573 [2024-06-11 03:26:44.772378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989924 ]
00:30:03.573 EAL: No free 2048 kB hugepages reported on node 1
00:30:03.573 [2024-06-11 03:26:44.833106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:03.573 [2024-06-11 03:26:44.872239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:30:03.573 03:26:44 accel.accel_dif_generate -- accel/accel.sh@19-23 -- # option loop (condensed): val=0x1, val=dif_generate (accel_opc=dif_generate), val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=No, plus empty terminator vals
00:30:04.953 03:26:46 accel.accel_dif_generate -- accel/accel.sh@19-21 -- # option loop (condensed): empty vals drained after the run
00:30:04.953 03:26:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:30:04.953 03:26:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:30:04.953 03:26:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:30:04.953 real 0m1.294s
00:30:04.953 user 0m1.182s
00:30:04.953 sys 0m0.119s
00:30:04.953 03:26:46 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:04.953 03:26:46 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:30:04.953 ************************************
00:30:04.953 END TEST accel_dif_generate
00:30:04.953 ************************************
00:30:04.953 03:26:46 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:30:04.953 03:26:46 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']'
00:30:04.953 03:26:46 accel -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:04.953 ************************************
00:30:04.954 START TEST accel_dif_generate_copy
00:30:04.954 ************************************
00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy
00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
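Every workload so far runs for the requested 1 second (-t 1) but reports roughly 1.28 to 1.30 s of wall-clock time; the difference is per-process start-up and teardown, since each test boots a fresh SPDK app. A quick check using the real-time figures printed above (requires bc):

    # Start-up/teardown overhead per run, from the six real-time values above
    # (dualcast, compare, xor, xor -x 3, dif_verify, dif_generate).
    for t in 1.278 1.295 1.292 1.296 1.293 1.294; do
        echo "real ${t}s -> overhead $(echo "$t - 1" | bc)s"
    done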
00:30:04.954 [2024-06-11 03:26:46.112878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1990154 ] 00:30:04.954 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.954 [2024-06-11 03:26:46.170995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.954 [2024-06-11 03:26:46.210110] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:04.954 03:26:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
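The long runs of case "$var" in / IFS=: / read -r var val above are accel.sh consuming colon-separated key:value output from accel_perf; note accel_opc=dif_generate_copy being captured at accel.sh@23 and accel_module=software at accel.sh@22. A minimal sketch of that shell pattern, assuming hypothetical key names "opc" and "module" and a canned sample input, since this trace never shows the keys actually matched:

    # Sketch only: the key names "opc"/"module" and the printf sample are
    # assumptions; the real keys come from accel_perf's output, which this
    # trace does not show.
    while IFS=: read -r var val; do
      case "$var" in
        opc)    accel_opc=$val ;;      # e.g. dif_generate_copy, compress
        module) accel_module=$val ;;   # e.g. software
        *)      : ;;                   # ignore unrecognized keys
      esac
    done < <(printf 'opc:dif_generate_copy\nmodule:software\n')
    echo "ran $accel_opc via the $accel_module module"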
00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:06.333 00:30:06.333 real 0m1.281s 00:30:06.333 user 0m1.179s 00:30:06.333 sys 0m0.108s 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:06.333 03:26:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:30:06.333 ************************************ 00:30:06.333 END TEST accel_dif_generate_copy 00:30:06.333 ************************************ 00:30:06.333 03:26:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:30:06.333 03:26:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:06.333 03:26:47 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:30:06.333 03:26:47 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:06.333 03:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:30:06.333 ************************************ 00:30:06.333 START TEST accel_comp 00:30:06.333 ************************************ 00:30:06.333 03:26:47 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:30:06.333 [2024-06-11 03:26:47.460043] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:06.333 [2024-06-11 03:26:47.460109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1990383 ] 00:30:06.333 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.333 [2024-06-11 03:26:47.520610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.333 [2024-06-11 03:26:47.559786] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 
03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:30:06.333 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.334 03:26:47 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:06.334 03:26:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:30:07.713 03:26:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:07.713 00:30:07.713 real 0m1.295s 00:30:07.713 user 0m1.176s 00:30:07.713 sys 0m0.125s 00:30:07.713 03:26:48 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:07.713 03:26:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:30:07.713 ************************************ 00:30:07.713 END TEST accel_comp 00:30:07.713 ************************************ 00:30:07.713 03:26:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:30:07.713 03:26:48 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:30:07.713 03:26:48 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:07.713 03:26:48 accel -- common/autotest_common.sh@10 -- # set +x 00:30:07.713 ************************************ 00:30:07.713 START TEST accel_decomp 00:30:07.713 ************************************ 00:30:07.713 03:26:48 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:30:07.713 [2024-06-11 03:26:48.812549] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:07.713 [2024-06-11 03:26:48.812617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1990624 ] 00:30:07.713 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.713 [2024-06-11 03:26:48.875522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.713 [2024-06-11 03:26:48.914592] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.713 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:07.714 03:26:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:30:09.091 03:26:50 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:09.091 00:30:09.091 real 0m1.302s 00:30:09.091 user 0m1.183s 00:30:09.091 sys 0m0.125s 00:30:09.091 03:26:50 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:09.091 03:26:50 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:30:09.091 ************************************ 00:30:09.091 END TEST accel_decomp 00:30:09.091 ************************************ 00:30:09.091 
03:26:50 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:30:09.091 03:26:50 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:30:09.091 03:26:50 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:09.091 03:26:50 accel -- common/autotest_common.sh@10 -- # set +x 00:30:09.091 ************************************ 00:30:09.091 START TEST accel_decomp_full 00:30:09.091 ************************************ 00:30:09.091 03:26:50 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:30:09.091 03:26:50 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:30:09.091 03:26:50 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:30:09.091 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.091 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.091 03:26:50 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:30:09.091 03:26:50 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:30:09.092 [2024-06-11 03:26:50.170563] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
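The run_test record above shows the full accel_perf command line the harness drives for accel_decomp_full; -c /dev/fd/62 feeds it the JSON accel config assembled by build_accel_config (empty in this run: accel_json_cfg=()). A hedged reconstruction for launching the same job by hand, with the workspace path copied from the log and flag meanings inferred only from what this trace reads back:

    # Assumption: flag semantics inferred from the values the trace consumes
    # (-t 1 -> '1 seconds'; -w decompress -> accel_opc=decompress; -o 0 ->
    # the job size becomes '111250 bytes', i.e. the whole input file, instead
    # of the 4096-byte default; -y flips the trailing No to Yes, presumably
    # result verification).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0
    # The harness additionally passes -c /dev/fd/62 with a JSON accel config;
    # it is omitted here because this trace builds an empty one.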
00:30:09.092 [2024-06-11 03:26:50.170609] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1990880 ] 00:30:09.092 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.092 [2024-06-11 03:26:50.230360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.092 [2024-06-11 03:26:50.270329] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:09.092 03:26:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:30:10.470 03:26:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:10.470 00:30:10.470 real 0m1.300s 00:30:10.470 user 0m1.193s 00:30:10.470 sys 0m0.112s 00:30:10.470 03:26:51 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:10.470 03:26:51 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:30:10.470 ************************************ 00:30:10.470 END TEST accel_decomp_full 00:30:10.470 ************************************ 00:30:10.470 03:26:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:30:10.470 03:26:51 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:30:10.470 03:26:51 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:10.470 03:26:51 accel -- common/autotest_common.sh@10 -- # set +x 00:30:10.470 ************************************ 00:30:10.470 START TEST accel_decomp_mcore 00:30:10.470 ************************************ 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:30:10.470 [2024-06-11 03:26:51.531537] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:10.470 [2024-06-11 03:26:51.531603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991125 ] 00:30:10.470 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.470 [2024-06-11 03:26:51.594285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.470 [2024-06-11 03:26:51.636982] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.470 [2024-06-11 03:26:51.637085] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:10.470 [2024-06-11 03:26:51.637110] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:10.470 [2024-06-11 03:26:51.637112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:30:10.470 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:10.471 03:26:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.408 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.408 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.408 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.408 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.408 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.667 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.667 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.667 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
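Unlike the single-core runs earlier, this mcore pass was launched with -m 0xf and started four reactors (cores 0-3), which is also why the real/user summary just below packs roughly 4.5 s of user time into about 1.3 s of wall time. A small sketch, as an aside, of how such a hex core mask decodes into reactor core IDs:

    # 0xf = binary 1111 -> bits 0..3 set -> cores 0-3, matching the four
    # "Reactor started on core N" notices earlier in this log.
    mask=0xf
    for core in $(seq 0 31); do
      if (( (mask >> core) & 1 )); then
        echo "core $core enabled"
      fi
    done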
00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:11.668 00:30:11.668 real 0m1.316s 00:30:11.668 user 0m4.526s 00:30:11.668 sys 0m0.134s 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:11.668 03:26:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:30:11.668 ************************************ 00:30:11.668 END TEST accel_decomp_mcore 00:30:11.668 ************************************ 00:30:11.668 03:26:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:30:11.668 03:26:52 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:30:11.668 03:26:52 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:11.668 03:26:52 accel -- common/autotest_common.sh@10 -- # set +x 00:30:11.668 ************************************ 00:30:11.668 START TEST accel_decomp_full_mcore 00:30:11.668 ************************************ 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:30:11.668 03:26:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:30:11.668 [2024-06-11 03:26:52.911818] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:11.668 [2024-06-11 03:26:52.911885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991375 ] 00:30:11.668 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.668 [2024-06-11 03:26:52.972126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.668 [2024-06-11 03:26:53.014995] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.668 [2024-06-11 03:26:53.015093] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.668 [2024-06-11 03:26:53.015115] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.668 [2024-06-11 03:26:53.015117] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.668 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.928 03:26:53 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:11.928 03:26:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:12.865 00:30:12.865 real 0m1.321s 00:30:12.865 user 0m4.567s 00:30:12.865 sys 0m0.125s 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:12.865 03:26:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:30:12.865 ************************************ 00:30:12.865 END TEST accel_decomp_full_mcore 00:30:12.865 ************************************ 00:30:12.865 03:26:54 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:30:12.865 03:26:54 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:30:12.865 03:26:54 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:12.865 03:26:54 accel -- common/autotest_common.sh@10 -- # set +x 00:30:13.124 ************************************ 00:30:13.124 START TEST accel_decomp_mthread 00:30:13.124 ************************************ 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:13.124 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
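Each of these accel_decomp_* cases reduces to a single run of the accel_perf example binary; the full command line is visible in the trace above, with the JSON accel config handed over an inherited file descriptor (-c /dev/fd/62). A minimal stand-alone sketch of the mthread invocation follows; SPDK_ROOT and the empty-subsystems config are assumptions, since the log never prints the config body itself:

    #!/usr/bin/env bash
    # Sketch: re-create the accel_decomp_mthread invocation traced above.
    SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    # build_accel_config left accel_json_cfg empty here (every '[[ 0 -gt 0 ]]'
    # check failed), so pass a config that loads no extra accel modules:
    exec 62< <(echo '{"subsystems": []}')
    "$SPDK_ROOT/build/examples/accel_perf" -c /dev/fd/62 \
        -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y -T 2
    # -t 1: run for 1 second ('1 seconds' in the trace); -w decompress: workload
    # under test; -l: compressed input file; -y: verify the decompressed output;
    # -T 2: two worker threads ('val=2' in the trace).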
00:30:13.125 [2024-06-11 03:26:54.289008] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:13.125 [2024-06-11 03:26:54.289048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991636 ] 00:30:13.125 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.125 [2024-06-11 03:26:54.345932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.125 [2024-06-11 03:26:54.384876] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:13.125 03:26:54 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:14.547 00:30:14.547 real 0m1.284s 00:30:14.547 user 0m1.173s 00:30:14.547 sys 0m0.118s 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:14.547 03:26:55 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:30:14.547 ************************************ 00:30:14.547 END TEST accel_decomp_mthread 00:30:14.547 ************************************ 00:30:14.547 03:26:55 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:30:14.547 03:26:55 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:30:14.547 03:26:55 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:14.547 03:26:55 
accel -- common/autotest_common.sh@10 -- # set +x 00:30:14.547 ************************************ 00:30:14.547 START TEST accel_decomp_full_mthread 00:30:14.547 ************************************ 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:30:14.547 [2024-06-11 03:26:55.639593] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
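The EAL parameter dump that follows pins this run to core mask 0x1 (a single reactor on core 0), in contrast to the 0xf mask in the mcore cases above, which brought up reactors on cores 0 through 3. An illustrative loop showing how such a hex mask decodes to reactor cores:

    # Decode a hex core mask the way the reactor lines in this log can be read.
    mask=0xf   # 0xf -> cores 0-3 (the four mcore reactors); 0x1 -> core 0 only
    for ((core = 0; core < 8; core++)); do
        if (( (mask >> core) & 1 )); then
            echo "reactor runs on core $core"
        fi
    done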
00:30:14.547 [2024-06-11 03:26:55.639657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991885 ] 00:30:14.547 EAL: No free 2048 kB hugepages reported on node 1 00:30:14.547 [2024-06-11 03:26:55.700505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.547 [2024-06-11 03:26:55.740055] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:14.547 03:26:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.925 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:15.926 00:30:15.926 real 0m1.318s 00:30:15.926 user 0m1.202s 00:30:15.926 sys 0m0.122s 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:15.926 03:26:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:30:15.926 ************************************ 00:30:15.926 END TEST accel_decomp_full_mthread 00:30:15.926 
************************************ 00:30:15.926 03:26:56 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:30:15.926 03:26:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:30:15.926 03:26:56 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:30:15.926 03:26:56 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:15.926 03:26:56 accel -- common/autotest_common.sh@10 -- # set +x 00:30:15.926 03:26:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:30:15.926 03:26:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:30:15.926 03:26:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:30:15.926 03:26:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:30:15.926 03:26:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:30:15.926 03:26:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:30:15.926 03:26:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:30:15.926 03:26:56 accel -- accel/accel.sh@41 -- # jq -r . 00:30:15.926 ************************************ 00:30:15.926 START TEST accel_dif_functional_tests 00:30:15.926 ************************************ 00:30:15.926 03:26:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:30:15.926 [2024-06-11 03:26:57.034554] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:15.926 [2024-06-11 03:26:57.034588] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992132 ] 00:30:15.926 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.926 [2024-06-11 03:26:57.090749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:15.926 [2024-06-11 03:26:57.131910] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.926 [2024-06-11 03:26:57.132015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.926 [2024-06-11 03:26:57.132016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.926 00:30:15.926 00:30:15.926 CUnit - A unit testing framework for C - Version 2.1-3 00:30:15.926 http://cunit.sourceforge.net/ 00:30:15.926 00:30:15.926 00:30:15.926 Suite: accel_dif 00:30:15.926 Test: verify: DIF generated, GUARD check ...passed 00:30:15.926 Test: verify: DIF generated, APPTAG check ...passed 00:30:15.926 Test: verify: DIF generated, REFTAG check ...passed 00:30:15.926 Test: verify: DIF not generated, GUARD check ...[2024-06-11 03:26:57.195058] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:30:15.926 passed 00:30:15.926 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 03:26:57.195106] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:30:15.926 passed 00:30:15.926 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 03:26:57.195140] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:30:15.926 passed 00:30:15.926 Test: verify: APPTAG correct, APPTAG check ...passed 00:30:15.926 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 03:26:57.195187] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:30:15.926 passed 00:30:15.926 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:30:15.926 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:30:15.926 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:30:15.926 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 03:26:57.195284] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:30:15.926 passed 00:30:15.926 Test: verify copy: DIF generated, GUARD check ...passed 00:30:15.926 Test: verify copy: DIF generated, APPTAG check ...passed 00:30:15.926 Test: verify copy: DIF generated, REFTAG check ...passed 00:30:15.926 Test: verify copy: DIF not generated, GUARD check ...[2024-06-11 03:26:57.195384] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:30:15.926 passed 00:30:15.926 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-11 03:26:57.195403] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:30:15.926 passed 00:30:15.926 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-11 03:26:57.195421] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:30:15.926 passed 00:30:15.926 Test: generate copy: DIF generated, GUARD check ...passed 00:30:15.926 Test: generate copy: DIF generated, APPTAG check ...passed 00:30:15.926 Test: generate copy: DIF generated, REFTAG check ...passed 00:30:15.926 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:30:15.926 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:30:15.926 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:30:15.926 Test: generate copy: iovecs-len validate ...[2024-06-11 03:26:57.195572] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:30:15.926 passed 00:30:15.926 Test: generate copy: buffer alignment validate ...passed 00:30:15.926 00:30:15.926 Run Summary: Type Total Ran Passed Failed Inactive 00:30:15.926 suites 1 1 n/a 0 0 00:30:15.926 tests 26 26 26 0 0 00:30:15.926 asserts 115 115 115 0 n/a 00:30:15.926 00:30:15.926 Elapsed time = 0.002 seconds 00:30:16.186 00:30:16.186 real 0m0.365s 00:30:16.186 user 0m0.565s 00:30:16.186 sys 0m0.143s 00:30:16.186 03:26:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:16.186 03:26:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:30:16.186 ************************************ 00:30:16.186 END TEST accel_dif_functional_tests 00:30:16.186 ************************************ 00:30:16.186 00:30:16.186 real 0m29.309s 00:30:16.186 user 0m32.848s 00:30:16.186 sys 0m4.203s 00:30:16.186 03:26:57 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:16.186 03:26:57 accel -- common/autotest_common.sh@10 -- # set +x 00:30:16.186 ************************************ 00:30:16.186 END TEST accel 00:30:16.186 ************************************ 00:30:16.186 03:26:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:30:16.186 03:26:57 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:16.186 03:26:57 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:16.186 03:26:57 -- common/autotest_common.sh@10 -- # set +x 00:30:16.186 ************************************ 00:30:16.186 START TEST accel_rpc 00:30:16.186 ************************************ 00:30:16.186 03:26:57 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:30:16.186 * Looking for test storage... 00:30:16.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:30:16.186 03:26:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:30:16.186 03:26:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1992414 00:30:16.186 03:26:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1992414 00:30:16.186 03:26:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:30:16.186 03:26:57 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1992414 ']' 00:30:16.186 03:26:57 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.186 03:26:57 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:16.186 03:26:57 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.186 03:26:57 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:16.186 03:26:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:16.445 [2024-06-11 03:26:57.594318] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
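The accel_rpc suite launching here exercises opcode reassignment over the RPC socket: spdk_tgt starts with --wait-for-rpc, the copy opcode is first assigned to a bogus module and then to software, and only after framework_start_init must accel_get_opc_assignments report software. A condensed replay of the flow traced below (rpc.py path and the sleep stand-in are assumptions; the real harness uses waitforlisten and killprocess):

    SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    "$SPDK_ROOT/build/bin/spdk_tgt" --wait-for-rpc &   # init paused until framework_start_init
    sleep 2                                            # crude stand-in for waitforlisten
    rpc() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }
    rpc accel_assign_opc -o copy -m incorrect   # accepted: modules are not resolved yet
    rpc accel_assign_opc -o copy -m software    # the later assignment wins
    rpc framework_start_init                    # subsystem init resolves the assignment
    rpc accel_get_opc_assignments | jq -r .copy # prints "software", matching the grep below
    kill %1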
00:30:16.445 [2024-06-11 03:26:57.594365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992414 ] 00:30:16.445 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.445 [2024-06-11 03:26:57.650632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.445 [2024-06-11 03:26:57.690929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.445 03:26:57 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:16.445 03:26:57 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:30:16.445 03:26:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:30:16.445 03:26:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:30:16.445 03:26:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:30:16.445 03:26:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:30:16.445 03:26:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:30:16.445 03:26:57 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:16.445 03:26:57 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:16.445 03:26:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:16.445 ************************************ 00:30:16.445 START TEST accel_assign_opcode 00:30:16.445 ************************************ 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:30:16.445 [2024-06-11 03:26:57.755412] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:30:16.445 [2024-06-11 03:26:57.763419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.445 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:30:16.704 03:26:57 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:16.704 software 00:30:16.704 00:30:16.704 real 0m0.227s 00:30:16.704 user 0m0.045s 00:30:16.704 sys 0m0.007s 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:16.704 03:26:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:30:16.704 ************************************ 00:30:16.704 END TEST accel_assign_opcode 00:30:16.704 ************************************ 00:30:16.704 03:26:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1992414 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1992414 ']' 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1992414 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1992414 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1992414' 00:30:16.704 killing process with pid 1992414 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@968 -- # kill 1992414 00:30:16.704 03:26:58 accel_rpc -- common/autotest_common.sh@973 -- # wait 1992414 00:30:16.963 00:30:16.963 real 0m0.887s 00:30:16.963 user 0m0.822s 00:30:16.963 sys 0m0.376s 00:30:16.963 03:26:58 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:16.963 03:26:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:16.963 ************************************ 00:30:16.963 END TEST accel_rpc 00:30:16.963 ************************************ 00:30:17.222 03:26:58 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:30:17.222 03:26:58 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:17.222 03:26:58 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:17.222 03:26:58 -- common/autotest_common.sh@10 -- # set +x 00:30:17.222 ************************************ 00:30:17.222 START TEST app_cmdline 00:30:17.222 ************************************ 00:30:17.222 03:26:58 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:30:17.222 * Looking for test storage... 
00:30:17.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:30:17.222 03:26:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:30:17.222 03:26:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:30:17.222 03:26:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1992504 00:30:17.222 03:26:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1992504 00:30:17.222 03:26:58 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1992504 ']' 00:30:17.222 03:26:58 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.222 03:26:58 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:17.222 03:26:58 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.222 03:26:58 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:17.222 03:26:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:17.222 [2024-06-11 03:26:58.526823] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:17.222 [2024-06-11 03:26:58.526872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992504 ] 00:30:17.222 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.222 [2024-06-11 03:26:58.581885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.222 [2024-06-11 03:26:58.624015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.481 03:26:58 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:17.481 03:26:58 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:30:17.481 03:26:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:30:17.740 { 00:30:17.740 "version": "SPDK v24.09-pre git sha1 5f5c52753", 00:30:17.740 "fields": { 00:30:17.740 "major": 24, 00:30:17.740 "minor": 9, 00:30:17.740 "patch": 0, 00:30:17.740 "suffix": "-pre", 00:30:17.740 "commit": "5f5c52753" 00:30:17.740 } 00:30:17.740 } 00:30:17.740 03:26:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:30:17.740 03:26:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:30:17.740 03:26:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:30:17.740 03:26:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:30:17.740 03:26:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:30:17.740 03:26:58 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:17.740 03:26:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:17.740 03:26:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:30:17.740 03:26:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:30:17.740 03:26:58 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:17.740 03:26:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:30:17.740 03:26:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:30:17.740 03:26:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:17.740 03:26:59 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:30:17.999 request: 00:30:17.999 { 00:30:17.999 "method": "env_dpdk_get_mem_stats", 00:30:17.999 "req_id": 1 00:30:17.999 } 00:30:17.999 Got JSON-RPC error response 00:30:17.999 response: 00:30:17.999 { 00:30:17.999 "code": -32601, 00:30:17.999 "message": "Method not found" 00:30:17.999 } 00:30:17.999 03:26:59 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:30:17.999 03:26:59 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:30:17.999 03:26:59 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:30:17.999 03:26:59 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:30:17.999 03:26:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1992504 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1992504 ']' 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1992504 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1992504 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1992504' 00:30:18.000 killing process with pid 1992504 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@968 -- # kill 1992504 00:30:18.000 03:26:59 app_cmdline -- common/autotest_common.sh@973 -- # wait 1992504 00:30:18.258 00:30:18.258 real 0m1.115s 00:30:18.258 user 0m1.318s 00:30:18.258 sys 0m0.376s 00:30:18.258 03:26:59 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:18.258 03:26:59 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:30:18.258 ************************************ 00:30:18.258 END TEST app_cmdline 00:30:18.258 ************************************ 00:30:18.258 03:26:59 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:30:18.258 03:26:59 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:30:18.258 03:26:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:18.258 03:26:59 -- common/autotest_common.sh@10 -- # set +x 00:30:18.258 ************************************ 00:30:18.258 START TEST version 00:30:18.258 ************************************ 00:30:18.258 03:26:59 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:30:18.517 * Looking for test storage... 00:30:18.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:30:18.517 03:26:59 version -- app/version.sh@17 -- # get_header_version major 00:30:18.517 03:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # cut -f2 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:30:18.517 03:26:59 version -- app/version.sh@17 -- # major=24 00:30:18.517 03:26:59 version -- app/version.sh@18 -- # get_header_version minor 00:30:18.517 03:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # cut -f2 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:30:18.517 03:26:59 version -- app/version.sh@18 -- # minor=9 00:30:18.517 03:26:59 version -- app/version.sh@19 -- # get_header_version patch 00:30:18.517 03:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # cut -f2 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:30:18.517 03:26:59 version -- app/version.sh@19 -- # patch=0 00:30:18.517 03:26:59 version -- app/version.sh@20 -- # get_header_version suffix 00:30:18.517 03:26:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # cut -f2 00:30:18.517 03:26:59 version -- app/version.sh@14 -- # tr -d '"' 00:30:18.517 03:26:59 version -- app/version.sh@20 -- # suffix=-pre 00:30:18.517 03:26:59 version -- app/version.sh@22 -- # version=24.9 00:30:18.517 03:26:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:30:18.517 03:26:59 version -- app/version.sh@28 -- # version=24.9rc0 00:30:18.517 03:26:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:30:18.517 03:26:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:30:18.517 03:26:59 version -- app/version.sh@30 -- # py_version=24.9rc0 
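The version suite above assembles 24.9rc0 from include/spdk/version.h: get_header_version greps each SPDK_VERSION_* define, cut and tr strip the field and its quotes, the patch level of 0 is dropped, and the -pre suffix is apparently rendered as rc0 to match what Python reports as spdk.__version__. A hedged recreation of that helper (header path assumed):

    SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    get_header_version() {
        # Mirrors the grep/cut/tr pipeline traced above; cut -f2 relies on the
        # tab-separated '#define<TAB>NAME<TAB>value' layout of version.h.
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            "$SPDK_ROOT/include/spdk/version.h" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)    # 24
    minor=$(get_header_version MINOR)    # 9
    patch=$(get_header_version PATCH)    # 0, so it is left out of the version
    version=$major.$minor
    if (( patch != 0 )); then version=$version.$patch; fi
    # version.sh then compares "${version}rc0" against what
    # PYTHONPATH=$SPDK_ROOT/python python3 -c 'import spdk; print(spdk.__version__)'
    # reports, i.e. the '[[ 24.9rc0 == ... ]]' check that follows.
    echo "${version}rc0"   # -> 24.9rc0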
00:30:18.517 03:26:59 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:30:18.517 00:30:18.517 real 0m0.156s 00:30:18.517 user 0m0.085s 00:30:18.517 sys 0m0.106s 00:30:18.517 03:26:59 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:18.517 03:26:59 version -- common/autotest_common.sh@10 -- # set +x 00:30:18.517 ************************************ 00:30:18.517 END TEST version 00:30:18.517 ************************************ 00:30:18.518 03:26:59 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@198 -- # uname -s 00:30:18.518 03:26:59 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:30:18.518 03:26:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:30:18.518 03:26:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:30:18.518 03:26:59 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:30:18.518 03:26:59 -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:18.518 03:26:59 -- common/autotest_common.sh@10 -- # set +x 00:30:18.518 03:26:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:30:18.518 03:26:59 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:30:18.518 03:26:59 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:30:18.518 03:26:59 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:18.518 03:26:59 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:18.518 03:26:59 -- common/autotest_common.sh@10 -- # set +x 00:30:18.518 ************************************ 00:30:18.518 START TEST nvmf_tcp 00:30:18.518 ************************************ 00:30:18.518 03:26:59 nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:30:18.518 * Looking for test storage... 00:30:18.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:18.518 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.777 03:26:59 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.777 03:26:59 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.777 03:26:59 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.777 03:26:59 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.777 03:26:59 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:26:59 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:26:59 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:26:59 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:18.778 03:26:59 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:30:18.778 03:26:59 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:18.778 03:26:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:30:18.778 03:26:59 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:30:18.778 03:26:59 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:18.778 03:26:59 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:18.778 03:26:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.778 ************************************ 00:30:18.778 START TEST nvmf_example 00:30:18.778 ************************************ 00:30:18.778 03:26:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:30:18.778 * Looking for test storage... 
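One side effect visible in the PATH echo above: every time paths/export.sh is re-sourced, it prepends the same golangci/protoc/go directories again, so PATH keeps accumulating duplicate entries as the run progresses. Harmless for command lookup, but if it ever mattered, a dedup pass like this sketch (plain bash/awk, not part of the test scripts) would squash the repeats while preserving first-seen order:

# Hypothetical cleanup: keep the first occurrence of each PATH entry
# and drop later repeats.
dedup_path() {
    printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
PATH=$(dedup_path)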
00:30:18.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@723 -- # xtrace_disable 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:30:18.778 03:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:25.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:25.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:25.350 Found net devices under 
0000:86:00.0: cvl_0_0 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.350 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:25.351 Found net devices under 0000:86:00.1: cvl_0_1 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:25.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:25.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:30:25.351 00:30:25.351 --- 10.0.0.2 ping statistics --- 00:30:25.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.351 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:30:25.351 00:30:25.351 --- 10.0.0.1 ping statistics --- 00:30:25.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.351 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1996518 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1996518 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 1996518 ']' 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
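The nvmf_tcp_init sequence traced just above is the whole test topology: the first e810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables ACCEPT for the NVMe/TCP port and a ping in each direction as a sanity check. A condensed replay of the same bring-up, with interface names and addresses copied from the trace (assumes the two cvl_0_* netdevs exist and root privileges):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator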
00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:25.351 03:27:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.351 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.351 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:25.609 03:27:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:25.609 EAL: No free 2048 kB hugepages reported on node 1 
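Once waitforlisten sees the RPC socket, the target is configured entirely over JSON-RPC: a TCP transport, one 64 MiB / 512 B malloc bdev, a subsystem, a namespace, and a listener, after which spdk_nvme_perf drives the load. The same sequence issued by hand would look like this sketch (rpc.py path abbreviated; flags copied from the trace):

# RPC bring-up as traced above, against a target already listening on
# the default /var/tmp/spdk.sock.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport
$rpc bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> "Malloc0"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Then the workload: 64-deep queue, 4 KiB I/O, 30% read randrw, 10 s.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'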
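As a sanity check on the throughput column in the results that follow: at the 4096-byte I/O size set by -o 4096, 18406.35 IOPS × 4096 B ≈ 75,392,410 B/s, and 75,392,410 / 1,048,576 ≈ 71.90 MiB/s, which is exactly what the MiB/s column reports, so the table is self-consistent.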
00:30:35.590 Initializing NVMe Controllers 00:30:35.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.590 Initialization complete. Launching workers. 00:30:35.590 ======================================================== 00:30:35.590 Latency(us) 00:30:35.590 Device Information : IOPS MiB/s Average min max 00:30:35.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18406.35 71.90 3477.02 667.95 15477.72 00:30:35.591 ======================================================== 00:30:35.591 Total : 18406.35 71.90 3477.02 667.95 15477.72 00:30:35.591 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:35.591 03:27:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:35.591 rmmod nvme_tcp 00:30:35.850 rmmod nvme_fabrics 00:30:35.850 rmmod nvme_keyring 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1996518 ']' 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1996518 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 1996518 ']' 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 1996518 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1996518 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1996518' 00:30:35.850 killing process with pid 1996518 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@968 -- # kill 1996518 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@973 -- # wait 1996518 00:30:35.850 nvmf threads initialize successfully 00:30:35.850 bdev subsystem init successfully 00:30:35.850 created a nvmf target service 00:30:35.850 create targets's poll groups done 00:30:35.850 all subsystems of target started 00:30:35.850 nvmf target is running 00:30:35.850 all subsystems of target stopped 00:30:35.850 destroy targets's poll groups done 00:30:35.850 destroyed the nvmf target service 00:30:35.850 bdev subsystem finish successfully 00:30:35.850 nvmf threads destroy successfully 00:30:35.850 03:27:17 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:35.850 03:27:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.390 03:27:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:38.390 03:27:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:30:38.390 03:27:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:30:38.390 03:27:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:38.390 00:30:38.390 real 0m19.383s 00:30:38.390 user 0m45.624s 00:30:38.390 sys 0m5.706s 00:30:38.390 03:27:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:30:38.390 03:27:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:30:38.390 ************************************ 00:30:38.390 END TEST nvmf_example 00:30:38.390 ************************************ 00:30:38.390 03:27:19 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:30:38.390 03:27:19 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:38.390 03:27:19 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:38.390 03:27:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.390 ************************************ 00:30:38.390 START TEST nvmf_filesystem 00:30:38.390 ************************************ 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:30:38.390 * Looking for test storage... 
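The END TEST / START TEST banners that bracket each suite here come from autotest's run_test helper, which also accounts for the per-suite real/user/sys timing lines. A stripped-down toy equivalent, assuming only what the banners and timing output themselves show:

# Toy version of the run_test pattern visible throughout this log:
# banner in, run the suite, time it, banner out.
run_test_sketch() {
    local name=$1 stars='************************************'
    shift
    echo "$stars"; echo "START TEST $name"; echo "$stars"
    time "$@"
    echo "$stars"; echo "END TEST $name"; echo "$stars"
}
run_test_sketch nvmf_filesystem ./test/nvmf/target/filesystem.sh --transport=tcp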
00:30:38.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:30:38.390 03:27:19 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:30:38.390 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:30:38.391 #define SPDK_CONFIG_H 00:30:38.391 #define SPDK_CONFIG_APPS 1 00:30:38.391 #define SPDK_CONFIG_ARCH native 00:30:38.391 #undef SPDK_CONFIG_ASAN 00:30:38.391 #undef SPDK_CONFIG_AVAHI 00:30:38.391 #undef SPDK_CONFIG_CET 00:30:38.391 #define SPDK_CONFIG_COVERAGE 1 00:30:38.391 #define SPDK_CONFIG_CROSS_PREFIX 00:30:38.391 #undef SPDK_CONFIG_CRYPTO 00:30:38.391 #undef SPDK_CONFIG_CRYPTO_MLX5 00:30:38.391 #undef SPDK_CONFIG_CUSTOMOCF 00:30:38.391 #undef SPDK_CONFIG_DAOS 00:30:38.391 #define SPDK_CONFIG_DAOS_DIR 00:30:38.391 #define SPDK_CONFIG_DEBUG 1 00:30:38.391 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:30:38.391 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:30:38.391 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:30:38.391 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:30:38.391 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:30:38.391 #undef SPDK_CONFIG_DPDK_UADK 00:30:38.391 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:30:38.391 #define SPDK_CONFIG_EXAMPLES 1 00:30:38.391 #undef SPDK_CONFIG_FC 00:30:38.391 #define SPDK_CONFIG_FC_PATH 00:30:38.391 #define SPDK_CONFIG_FIO_PLUGIN 1 00:30:38.391 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:30:38.391 #undef SPDK_CONFIG_FUSE 00:30:38.391 #undef SPDK_CONFIG_FUZZER 00:30:38.391 #define SPDK_CONFIG_FUZZER_LIB 00:30:38.391 #undef SPDK_CONFIG_GOLANG 00:30:38.391 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:30:38.391 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:30:38.391 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:30:38.391 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:30:38.391 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:30:38.391 #undef SPDK_CONFIG_HAVE_LIBBSD 00:30:38.391 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:30:38.391 #define SPDK_CONFIG_IDXD 1 00:30:38.391 #define SPDK_CONFIG_IDXD_KERNEL 1 00:30:38.391 #undef SPDK_CONFIG_IPSEC_MB 00:30:38.391 #define SPDK_CONFIG_IPSEC_MB_DIR 00:30:38.391 #define SPDK_CONFIG_ISAL 1 00:30:38.391 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:30:38.391 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:30:38.391 #define 
SPDK_CONFIG_LIBDIR 00:30:38.391 #undef SPDK_CONFIG_LTO 00:30:38.391 #define SPDK_CONFIG_MAX_LCORES 00:30:38.391 #define SPDK_CONFIG_NVME_CUSE 1 00:30:38.391 #undef SPDK_CONFIG_OCF 00:30:38.391 #define SPDK_CONFIG_OCF_PATH 00:30:38.391 #define SPDK_CONFIG_OPENSSL_PATH 00:30:38.391 #undef SPDK_CONFIG_PGO_CAPTURE 00:30:38.391 #define SPDK_CONFIG_PGO_DIR 00:30:38.391 #undef SPDK_CONFIG_PGO_USE 00:30:38.391 #define SPDK_CONFIG_PREFIX /usr/local 00:30:38.391 #undef SPDK_CONFIG_RAID5F 00:30:38.391 #undef SPDK_CONFIG_RBD 00:30:38.391 #define SPDK_CONFIG_RDMA 1 00:30:38.391 #define SPDK_CONFIG_RDMA_PROV verbs 00:30:38.391 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:30:38.391 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:30:38.391 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:30:38.391 #define SPDK_CONFIG_SHARED 1 00:30:38.391 #undef SPDK_CONFIG_SMA 00:30:38.391 #define SPDK_CONFIG_TESTS 1 00:30:38.391 #undef SPDK_CONFIG_TSAN 00:30:38.391 #define SPDK_CONFIG_UBLK 1 00:30:38.391 #define SPDK_CONFIG_UBSAN 1 00:30:38.391 #undef SPDK_CONFIG_UNIT_TESTS 00:30:38.391 #undef SPDK_CONFIG_URING 00:30:38.391 #define SPDK_CONFIG_URING_PATH 00:30:38.391 #undef SPDK_CONFIG_URING_ZNS 00:30:38.391 #undef SPDK_CONFIG_USDT 00:30:38.391 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:30:38.391 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:30:38.391 #define SPDK_CONFIG_VFIO_USER 1 00:30:38.391 #define SPDK_CONFIG_VFIO_USER_DIR 00:30:38.391 #define SPDK_CONFIG_VHOST 1 00:30:38.391 #define SPDK_CONFIG_VIRTIO 1 00:30:38.391 #undef SPDK_CONFIG_VTUNE 00:30:38.391 #define SPDK_CONFIG_VTUNE_DIR 00:30:38.391 #define SPDK_CONFIG_WERROR 1 00:30:38.391 #define SPDK_CONFIG_WPDK_DIR 00:30:38.391 #undef SPDK_CONFIG_XNVME 00:30:38.391 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.391 03:27:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
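A few records back, applications.sh dumped include/spdk/config.h and pattern-matched it for #define SPDK_CONFIG_DEBUG before consulting SPDK_AUTOTEST_DEBUG_APPS. Stripped of the xtrace noise, that gate is just a bash substring test over the whole file, roughly:

# Sketch of the debug-build gate from applications.sh (path assumed).
config_h=include/spdk/config.h
if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    : # debug build: debug-only app options may be enabled
fi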
00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:30:38.392 
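[Editor's note] The pm/common trace above shows how the power-management helpers decide which resource monitors to launch and whether each one needs sudo. A minimal bash sketch of the same pattern, with the names and values copied from the trace (this is an illustration of the idiom, not the full scripts/perf/pm/common source):

    #!/usr/bin/env bash
    # Map each monitor script to whether it must run under sudo (1) or not (0).
    declare -A MONITOR_RESOURCES_SUDO=(
        ["collect-bmc-pm"]=1     # BMC power readings need root
        ["collect-cpu-load"]=0
        ["collect-cpu-temp"]=0
        ["collect-vmstat"]=0
    )
    # SUDO[0] is empty and SUDO[1] is the sudo prefix, so indexing SUDO with the
    # 0/1 flag above yields the right command prefix for each monitor.
    SUDO[0]=""
    SUDO[1]="sudo -E"

    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    # On bare-metal Linux (not QEMU, not a container) the extra monitors join in,
    # which is exactly the branch taken in the trace above.
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi

    for mon in "${MONITOR_RESOURCES[@]}"; do
        echo "${SUDO[${MONITOR_RESOURCES_SUDO[$mon]:-0}]} $mon"
    done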
03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:30:38.392 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:30:38.393 
03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
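[Editor's note] The long run of "-- # : 0" / "export SPDK_TEST_*" pairs above is produced by bash's no-op default-assignment idiom: ":" does nothing, but its ${VAR:=default} argument assigns a default only when the variable is unset or empty, so Jenkins can pre-seed any flag before autotest_common.sh is sourced. A sketch of the idiom (the exact wording in autotest_common.sh may differ slightly; values here are taken from this run's trace):

    # ':' is a no-op builtin; the expansion side effect does the work.
    : "${SPDK_TEST_NVMF:=1}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}"
    export SPDK_TEST_NVMF_NICS
    echo "NVMF=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT nics=$SPDK_TEST_NVMF_NICS"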
00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:30:38.393 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1999239 ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1999239 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.hWnDr9 00:30:38.394 
03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.hWnDr9/tests/target /tmp/spdk.hWnDr9 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=183867826176 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974316032 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12106489856 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97931522048 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185289216 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9576448 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97984151552 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3006464 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:30:38.394 * Looking for test storage... 
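[Editor's note] The set_test_storage trace above parses `df -T` into per-mount-point arrays and then checks whether the filesystem backing a candidate test directory has room for the requested size (2 GiB plus slack). A condensed sketch of that pattern, not the exact autotest_common.sh code:

    # df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on.
    declare -A mounts fss sizes avails uses
    while read -r source fs size used avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df reports 1K blocks; convert to bytes
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((used * 1024))
    done < <(df -T | grep -v Filesystem)

    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB + overhead, as in the trace
    target_dir=${1:-/tmp}
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    if (( ${avails[$mount]:-0} >= requested_size )); then
        echo "* Found test storage at $target_dir (on $mount)"
    fi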
00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=183867826176 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=14321082368 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.394 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:30:38.395 03:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.964 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:44.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:44.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:44.965 Found net devices under 0000:86:00.0: cvl_0_0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:44.965 Found net devices under 0000:86:00.1: cvl_0_1 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:44.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:44.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:30:44.965 00:30:44.965 --- 10.0.0.2 ping statistics --- 00:30:44.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.965 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:30:44.965 00:30:44.965 --- 10.0.0.1 ping statistics --- 00:30:44.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.965 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:30:44.965 ************************************ 00:30:44.965 START TEST nvmf_filesystem_no_in_capsule 00:30:44.965 ************************************ 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2002639 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2002639 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 
2002639 ']' 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:30:44.965 03:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:30:44.965 [2024-06-11 03:27:25.969322] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:30:44.965 [2024-06-11 03:27:25.969360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.965 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.965 [2024-06-11 03:27:26.031945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.965 [2024-06-11 03:27:26.075157] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.965 [2024-06-11 03:27:26.075210] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.965 [2024-06-11 03:27:26.075217] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.965 [2024-06-11 03:27:26.075223] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.966 [2024-06-11 03:27:26.075231] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
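[Editor's note] The nvmf_tcp_init sequence above builds a loopback test topology out of the two physical E810 ports: the target-side port is moved into its own network namespace, so the kernel NVMe initiator and the SPDK target can exchange real TCP traffic on one host, and the target app is then launched inside that namespace. Reassembled from the trace as a runnable sketch (interface names are what this CVL node reported; substitute your own NICs and SPDK build path):

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"          # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator

    # The target runs inside the namespace, matching the trace above:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &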
00:30:44.966 [2024-06-11 03:27:26.075293] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:30:44.966 [2024-06-11 03:27:26.075311] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:30:44.966 [2024-06-11 03:27:26.075401] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:30:44.966 [2024-06-11 03:27:26.075402] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:44.966 [2024-06-11 03:27:26.222989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:44.966 Malloc1
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:44.966 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:45.225 [2024-06-11 03:27:26.376395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[
00:30:45.225 {
00:30:45.225 "name": "Malloc1",
00:30:45.225 "aliases": [
00:30:45.225 "5bbebf85-60ac-4ff7-ad03-7c48a70aa182"
00:30:45.225 ],
00:30:45.225 "product_name": "Malloc disk",
00:30:45.225 "block_size": 512,
00:30:45.225 "num_blocks": 1048576,
00:30:45.225 "uuid": "5bbebf85-60ac-4ff7-ad03-7c48a70aa182",
00:30:45.225 "assigned_rate_limits": {
00:30:45.225 "rw_ios_per_sec": 0,
00:30:45.225 "rw_mbytes_per_sec": 0,
00:30:45.225 "r_mbytes_per_sec": 0,
00:30:45.225 "w_mbytes_per_sec": 0
00:30:45.225 },
00:30:45.225 "claimed": true,
00:30:45.225 "claim_type": "exclusive_write",
00:30:45.225 "zoned": false,
00:30:45.225 "supported_io_types": {
00:30:45.225 "read": true,
00:30:45.225 "write": true,
00:30:45.225 "unmap": true,
00:30:45.225 "write_zeroes": true,
00:30:45.225 "flush": true,
00:30:45.225 "reset": true,
00:30:45.225 "compare": false,
00:30:45.225 "compare_and_write": false,
00:30:45.225 "abort": true,
00:30:45.225 "nvme_admin": false,
00:30:45.225 "nvme_io": false
00:30:45.225 },
00:30:45.225 "memory_domains": [
00:30:45.225 {
00:30:45.225 "dma_device_id": "system",
00:30:45.225 "dma_device_type": 1
00:30:45.225 },
00:30:45.225 {
00:30:45.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:45.225 "dma_device_type": 2
00:30:45.225 }
00:30:45.225 ],
00:30:45.225 "driver_specific": {}
00:30:45.225 }
00:30:45.225 ]'
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size'
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks'
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:30:45.225 03:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:30:46.602 03:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:30:46.602 03:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0
00:30:46.602 03:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0
00:30:46.602 03:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]]
00:30:46.602 03:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 ))
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter ))
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:30:48.547 03:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:30:49.143 03:27:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:50.079 ************************************
00:30:50.079 START TEST filesystem_ext4
00:30:50.079 ************************************
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']'
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F
00:30:50.079 03:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:30:50.079 mke2fs 1.46.5 (30-Dec-2021)
00:30:50.079 Discarding device blocks: 0/522240 done
00:30:50.079 Creating filesystem with 522240 1k blocks and 130560 inodes
00:30:50.079 Filesystem UUID: 999c3af1-99a9-4f36-85cb-c8a4e15c7a55
00:30:50.079 Superblock backups stored on blocks:
00:30:50.079 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:30:50.079
00:30:50.079 Allocating group tables: 0/64 done
00:30:50.079 Writing inode tables: 0/64 done
00:30:50.337 Creating journal (8192 blocks): done
00:30:51.162 Writing superblocks and filesystem accounting information: 0/64 6/64 done
00:30:51.162
00:30:51.162 03:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0
00:30:51.162 03:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2002639
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:30:52.098
00:30:52.098 real 0m2.091s
00:30:52.098 user 0m0.025s
00:30:52.098 sys 0m0.065s
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:30:52.098 ************************************
00:30:52.098 END TEST filesystem_ext4
00:30:52.098 ************************************
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:52.098 ************************************
00:30:52.098 START TEST filesystem_btrfs
00:30:52.098 ************************************
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']'
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f
00:30:52.098 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:30:52.357 btrfs-progs v6.6.2
00:30:52.357 See https://btrfs.readthedocs.io for more information.
00:30:52.357
00:30:52.357 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
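
The rpc_cmd calls traced above (target/filesystem.sh lines 52-60) provision the target end to end. Rewritten as plain scripts/rpc.py invocations, a sketch assuming the default /var/tmp/spdk.sock socket and omitting the --hostnqn/--hostid flags the log passes to nvme connect:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0            # TCP transport, in-capsule data off
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                   # 512 MiB ram disk, 512 B blocks -> 1048576 blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # attach from the initiator side; the namespace then enumerates as /dev/nvme0n1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
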
00:30:52.357 NOTE: several default settings have changed in version 5.15, please make sure
00:30:52.357 this does not affect your deployments:
00:30:52.358 - DUP for metadata (-m dup)
00:30:52.358 - enabled no-holes (-O no-holes)
00:30:52.358 - enabled free-space-tree (-R free-space-tree)
00:30:52.358
00:30:52.358 Label: (null)
00:30:52.358 UUID: d652b014-2750-467d-8f48-8322cefb3143
00:30:52.358 Node size: 16384
00:30:52.358 Sector size: 4096
00:30:52.358 Filesystem size: 510.00MiB
00:30:52.358 Block group profiles:
00:30:52.358 Data: single 8.00MiB
00:30:52.358 Metadata: DUP 32.00MiB
00:30:52.358 System: DUP 8.00MiB
00:30:52.358 SSD detected: yes
00:30:52.358 Zoned device: no
00:30:52.358 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:30:52.358 Runtime features: free-space-tree
00:30:52.358 Checksum: crc32c
00:30:52.358 Number of devices: 1
00:30:52.358 Devices:
00:30:52.358 ID SIZE PATH
00:30:52.358 1 510.00MiB /dev/nvme0n1p1
00:30:52.358
00:30:52.358 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0
00:30:52.358 03:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2002639
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:30:53.295
00:30:53.295 real 0m1.134s
00:30:53.295 user 0m0.025s
00:30:53.295 sys 0m0.129s
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:30:53.295 ************************************
00:30:53.295 END TEST filesystem_btrfs
00:30:53.295 ************************************
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:53.295 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:53.295 ************************************
00:30:53.295 START TEST filesystem_xfs
00:30:53.296 ************************************
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']'
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f
00:30:53.296 03:27:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1
00:30:53.555 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:30:53.555 = sectsz=512 attr=2, projid32bit=1
00:30:53.555 = crc=1 finobt=1, sparse=1, rmapbt=0
00:30:53.555 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:30:53.555 data = bsize=4096 blocks=130560, imaxpct=25
00:30:53.555 = sunit=0 swidth=0 blks
00:30:53.555 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:30:53.555 log =internal log bsize=4096 blocks=16384, version=2
00:30:53.555 = sectsz=512 sunit=0 blks, lazy-count=1
00:30:53.555 realtime =none extsz=4096 blocks=0, rtextents=0
00:30:54.123 Discarding blocks...Done.
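
Each filesystem_* subtest repeats the same smoke test against the exported namespace. Distilled from the trace into a sketch (device and mountpoint names taken from the log; the harness's retry logic is omitted):

    mkfs.xfs -f /dev/nvme0n1p1              # ext4 and btrfs runs use mkfs.ext4 -F / mkfs.btrfs -f instead
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync           # force a write through NVMe/TCP to the malloc bdev
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # the target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1   # device and partition must still be enumerable
    lsblk -l -o NAME | grep -q -w nvme0n1p1
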
00:30:54.123 03:27:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0
00:30:54.123 03:27:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2002639
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:30:56.657
00:30:56.657 real 0m3.315s
00:30:56.657 user 0m0.022s
00:30:56.657 sys 0m0.071s
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:56.657 03:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:30:56.657 ************************************
00:30:56.657 END TEST filesystem_xfs
00:30:56.657 ************************************
00:30:56.657 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:30:56.916 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:30:56.916 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:30:57.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2002639
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2002639 ']'
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2002639
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2002639
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2002639'
00:30:57.175 killing process with pid 2002639
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 2002639
00:30:57.175 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 2002639
00:30:57.434 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:30:57.434
00:30:57.434 real 0m12.904s
00:30:57.434 user 0m50.690s
00:30:57.434 sys 0m1.196s
00:30:57.434 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable
00:30:57.434 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.434 ************************************
00:30:57.434 END TEST nvmf_filesystem_no_in_capsule
00:30:57.434 ************************************
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:30:57.693 ************************************
00:30:57.693 START TEST nvmf_filesystem_in_capsule
00:30:57.693 ************************************
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2005081
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2005081
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 2005081 ']'
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:57.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable
00:30:57.693 03:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.693 [2024-06-11 03:27:38.937453] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:30:57.693 [2024-06-11 03:27:38.937490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:57.693 EAL: No free 2048 kB hugepages reported on node 1
00:30:57.693 [2024-06-11 03:27:38.999481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:57.693 [2024-06-11 03:27:39.041319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:57.693 [2024-06-11 03:27:39.041358] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:57.693 [2024-06-11 03:27:39.041366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:57.693 [2024-06-11 03:27:39.041372] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:57.693 [2024-06-11 03:27:39.041377] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
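
From here the suite reruns the identical scenario with in-capsule data enabled. The only setup difference is the -c argument to nvmf_create_transport, which sets how many bytes of write data a host may embed directly in the NVMe/TCP command capsule instead of transferring them in a separate data exchange; a side-by-side sketch of the two invocations seen in this log:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # second pass: up to 4096 bytes in-capsule
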
00:30:57.693 [2024-06-11 03:27:39.041420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:30:57.693 [2024-06-11 03:27:39.041520] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:30:57.693 [2024-06-11 03:27:39.041612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3
00:30:57.693 [2024-06-11 03:27:39.041612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.953 [2024-06-11 03:27:39.181025] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.953 Malloc1
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.953 [2024-06-11 03:27:39.326493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[
00:30:57.953 {
00:30:57.953 "name": "Malloc1",
00:30:57.953 "aliases": [
00:30:57.953 "d5b6ba40-dbaa-493e-a76e-840a10cd3434"
00:30:57.953 ],
00:30:57.953 "product_name": "Malloc disk",
00:30:57.953 "block_size": 512,
00:30:57.953 "num_blocks": 1048576,
00:30:57.953 "uuid": "d5b6ba40-dbaa-493e-a76e-840a10cd3434",
00:30:57.953 "assigned_rate_limits": {
00:30:57.953 "rw_ios_per_sec": 0,
00:30:57.953 "rw_mbytes_per_sec": 0,
00:30:57.953 "r_mbytes_per_sec": 0,
00:30:57.953 "w_mbytes_per_sec": 0
00:30:57.953 },
00:30:57.953 "claimed": true,
00:30:57.953 "claim_type": "exclusive_write",
00:30:57.953 "zoned": false,
00:30:57.953 "supported_io_types": {
00:30:57.953 "read": true,
00:30:57.953 "write": true,
00:30:57.953 "unmap": true,
00:30:57.953 "write_zeroes": true,
00:30:57.953 "flush": true,
00:30:57.953 "reset": true,
00:30:57.953 "compare": false,
00:30:57.953 "compare_and_write": false,
00:30:57.953 "abort": true,
00:30:57.953 "nvme_admin": false,
00:30:57.953 "nvme_io": false
00:30:57.953 },
00:30:57.953 "memory_domains": [
00:30:57.953 {
00:30:57.953 "dma_device_id": "system",
00:30:57.953 "dma_device_type": 1
00:30:57.953 },
00:30:57.953 {
00:30:57.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:57.953 "dma_device_type": 2
00:30:57.953 }
00:30:57.953 ],
00:30:57.953 "driver_specific": {}
00:30:57.953 }
00:30:57.953 ]'
00:30:57.953 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size'
00:30:58.212 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512
00:30:58.212 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks'
00:30:58.212 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576
00:30:58.212 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512
00:30:58.212 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512
00:30:58.212 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:30:58.212 03:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:30:59.149 03:27:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:30:59.149 03:27:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0
00:30:59.149 03:27:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0
00:30:59.149 03:27:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]]
00:30:59.149 03:27:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 ))
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter ))
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:31:01.684 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:31:01.685 03:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:31:01.943 03:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:31:02.881 ************************************
00:31:02.881 START TEST filesystem_in_capsule_ext4
00:31:02.881 ************************************
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']'
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F
00:31:02.881 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:31:02.881 mke2fs 1.46.5 (30-Dec-2021)
00:31:02.881 Discarding device blocks: 0/522240 done
00:31:02.881 Creating filesystem with 522240 1k blocks and 130560 inodes
00:31:02.881 Filesystem UUID: 1b062656-dce6-4bff-8d14-df63e318d790
00:31:02.881 Superblock backups stored on blocks:
00:31:02.881 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:31:02.881
00:31:02.881 Allocating group tables: 0/64 done
00:31:02.881 Writing inode tables: 0/64 done
00:31:03.140 Creating journal (8192 blocks): done
00:31:03.140 Writing superblocks and filesystem accounting information: 0/64 done
00:31:03.140
00:31:03.140 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0
00:31:03.140 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:31:03.140 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2005081
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:31:03.399
00:31:03.399 real 0m0.478s
00:31:03.399 user 0m0.029s
00:31:03.399 sys 0m0.060s
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:31:03.399 ************************************
00:31:03.399 END TEST filesystem_in_capsule_ext4
00:31:03.399 ************************************
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:31:03.399 ************************************
00:31:03.399 START TEST filesystem_in_capsule_btrfs
00:31:03.399 ************************************
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']'
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f
00:31:03.399 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:31:03.658 btrfs-progs v6.6.2
00:31:03.658 See https://btrfs.readthedocs.io for more information.
00:31:03.658
00:31:03.658 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
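
The get_bdev_size helper traced above derives the namespace size from the bdev_get_bdevs JSON, then the script compares it against the size the initiator reports. The arithmetic, shown as a standalone sketch with the values from this log:

    info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(echo "$info" | jq '.[] .block_size')       # 512
    nb=$(echo "$info" | jq '.[] .num_blocks')       # 1048576
    echo $(( bs * nb / 1024 / 1024 ))               # 512 (MiB), the value echoed by the helper
    echo $(( bs * nb ))                             # 536870912 bytes; must equal nvme_size from /sys/block
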
00:31:03.658 NOTE: several default settings have changed in version 5.15, please make sure
00:31:03.658 this does not affect your deployments:
00:31:03.658 - DUP for metadata (-m dup)
00:31:03.658 - enabled no-holes (-O no-holes)
00:31:03.658 - enabled free-space-tree (-R free-space-tree)
00:31:03.658
00:31:03.658 Label: (null)
00:31:03.658 UUID: 598841c7-9a35-46d7-a842-1ef9e5958b77
00:31:03.658 Node size: 16384
00:31:03.658 Sector size: 4096
00:31:03.658 Filesystem size: 510.00MiB
00:31:03.658 Block group profiles:
00:31:03.658 Data: single 8.00MiB
00:31:03.658 Metadata: DUP 32.00MiB
00:31:03.658 System: DUP 8.00MiB
00:31:03.658 SSD detected: yes
00:31:03.658 Zoned device: no
00:31:03.658 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:31:03.658 Runtime features: free-space-tree
00:31:03.658 Checksum: crc32c
00:31:03.658 Number of devices: 1
00:31:03.658 Devices:
00:31:03.658 ID SIZE PATH
00:31:03.658 1 510.00MiB /dev/nvme0n1p1
00:31:03.658
00:31:03.658 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0
00:31:03.658 03:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2005081
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:31:03.918
00:31:03.918 real 0m0.508s
00:31:03.918 user 0m0.034s
00:31:03.918 sys 0m0.116s
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:31:03.918 ************************************
00:31:03.918 END TEST filesystem_in_capsule_btrfs
00:31:03.918 ************************************
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:31:03.918 ************************************
00:31:03.918 START TEST filesystem_in_capsule_xfs
00:31:03.918 ************************************
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']'
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f
00:31:03.918 03:27:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1
00:31:04.177 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:31:04.177 = sectsz=512 attr=2, projid32bit=1
00:31:04.177 = crc=1 finobt=1, sparse=1, rmapbt=0
00:31:04.177 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:31:04.177 data = bsize=4096 blocks=130560, imaxpct=25
00:31:04.177 = sunit=0 swidth=0 blks
00:31:04.177 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:31:04.177 log =internal log bsize=4096 blocks=16384, version=2
00:31:04.177 = sectsz=512 sunit=0 blks, lazy-count=1
00:31:04.177 realtime =none extsz=4096 blocks=0, rtextents=0
00:31:04.743 Discarding blocks...Done.
00:31:04.743 03:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:31:04.743 03:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2005081 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:31:07.277 00:31:07.277 real 0m3.379s 00:31:07.277 user 0m0.021s 00:31:07.277 sys 0m0.073s 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:31:07.277 ************************************ 00:31:07.277 END TEST filesystem_in_capsule_xfs 00:31:07.277 ************************************ 00:31:07.277 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:31:07.536 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:31:07.536 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:07.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:07.536 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:07.536 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:31:07.536 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:31:07.536 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:07.536 03:27:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2005081 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 2005081 ']' 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 2005081 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2005081 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2005081' 00:31:07.537 killing process with pid 2005081 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 2005081 00:31:07.537 03:27:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 2005081 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:31:08.106 00:31:08.106 real 0m10.360s 00:31:08.106 user 0m40.585s 00:31:08.106 sys 0m1.150s 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:31:08.106 ************************************ 00:31:08.106 END TEST nvmf_filesystem_in_capsule 00:31:08.106 ************************************ 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:08.106 rmmod nvme_tcp 00:31:08.106 rmmod nvme_fabrics 00:31:08.106 rmmod nvme_keyring 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.106 03:27:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.011 03:27:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:10.011 00:31:10.011 real 0m31.978s 00:31:10.011 user 1m33.100s 00:31:10.011 sys 0m7.244s 00:31:10.011 03:27:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:10.011 03:27:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:31:10.011 ************************************ 00:31:10.011 END TEST nvmf_filesystem 00:31:10.011 ************************************ 00:31:10.271 03:27:51 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:31:10.271 03:27:51 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:10.271 03:27:51 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:10.271 03:27:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:10.271 ************************************ 00:31:10.271 START TEST nvmf_target_discovery 00:31:10.271 ************************************ 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:31:10.271 * Looking for test storage... 
00:31:10.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:31:10.271 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:10.272 03:27:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:16.912 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.913 03:27:57 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.913 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.913 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.913 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.913 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:16.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:31:16.913 00:31:16.913 --- 10.0.0.2 ping statistics --- 00:31:16.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.913 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:31:16.913 03:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:31:16.913 00:31:16.913 --- 10.0.0.1 ping statistics --- 00:31:16.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.913 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:31:16.913 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.913 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:31:16.913 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:16.913 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.913 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:16.913 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:16.913 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2010798 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2010798 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # '[' -z 2010798 ']' 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:31:16.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.914 [2024-06-11 03:27:58.092938] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:31:16.914 [2024-06-11 03:27:58.092984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.914 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.914 [2024-06-11 03:27:58.158704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.914 [2024-06-11 03:27:58.199802] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.914 [2024-06-11 03:27:58.199845] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.914 [2024-06-11 03:27:58.199852] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.914 [2024-06-11 03:27:58.199857] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.914 [2024-06-11 03:27:58.199862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.914 [2024-06-11 03:27:58.199912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.914 [2024-06-11 03:27:58.200019] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.914 [2024-06-11 03:27:58.200076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.914 [2024-06-11 03:27:58.200076] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:16.914 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 [2024-06-11 03:27:58.350023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:31:17.189 03:27:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 Null1 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 [2024-06-11 03:27:58.395404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 Null2 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:17.189 03:27:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 Null3 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.189 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.190 Null4 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.190 03:27:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.190 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:31:17.449 00:31:17.449 Discovery Log Number of Records 6, Generation counter 6 00:31:17.449 =====Discovery Log Entry 0====== 00:31:17.449 trtype: tcp 00:31:17.449 adrfam: ipv4 00:31:17.449 subtype: current discovery subsystem 00:31:17.449 treq: not required 00:31:17.449 portid: 0 00:31:17.449 trsvcid: 4420 00:31:17.449 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:17.449 traddr: 10.0.0.2 00:31:17.449 eflags: explicit discovery connections, duplicate discovery information 00:31:17.449 sectype: none 00:31:17.449 =====Discovery Log Entry 1====== 00:31:17.449 trtype: tcp 00:31:17.449 adrfam: ipv4 00:31:17.449 subtype: nvme subsystem 00:31:17.449 treq: not required 00:31:17.449 portid: 0 00:31:17.449 trsvcid: 4420 00:31:17.449 subnqn: nqn.2016-06.io.spdk:cnode1 00:31:17.449 traddr: 10.0.0.2 00:31:17.449 eflags: none 00:31:17.449 sectype: none 00:31:17.449 =====Discovery Log Entry 2====== 00:31:17.449 trtype: tcp 00:31:17.449 adrfam: ipv4 00:31:17.449 subtype: nvme subsystem 00:31:17.449 treq: not required 00:31:17.449 portid: 0 00:31:17.449 trsvcid: 4420 00:31:17.449 subnqn: nqn.2016-06.io.spdk:cnode2 00:31:17.449 traddr: 10.0.0.2 00:31:17.449 eflags: none 00:31:17.449 sectype: none 00:31:17.449 =====Discovery Log Entry 3====== 00:31:17.449 trtype: tcp 00:31:17.449 adrfam: ipv4 00:31:17.449 subtype: nvme subsystem 00:31:17.449 treq: not required 00:31:17.449 portid: 0 00:31:17.449 trsvcid: 4420 00:31:17.449 subnqn: nqn.2016-06.io.spdk:cnode3 00:31:17.449 traddr: 10.0.0.2 00:31:17.449 eflags: none 00:31:17.449 sectype: none 00:31:17.449 =====Discovery Log Entry 4====== 00:31:17.449 trtype: tcp 00:31:17.449 adrfam: ipv4 00:31:17.449 subtype: nvme subsystem 00:31:17.449 treq: not required 
00:31:17.449 portid: 0 00:31:17.449 trsvcid: 4420 00:31:17.449 subnqn: nqn.2016-06.io.spdk:cnode4 00:31:17.449 traddr: 10.0.0.2 00:31:17.449 eflags: none 00:31:17.449 sectype: none 00:31:17.449 =====Discovery Log Entry 5====== 00:31:17.449 trtype: tcp 00:31:17.449 adrfam: ipv4 00:31:17.449 subtype: discovery subsystem referral 00:31:17.449 treq: not required 00:31:17.449 portid: 0 00:31:17.449 trsvcid: 4430 00:31:17.449 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:17.449 traddr: 10.0.0.2 00:31:17.449 eflags: none 00:31:17.449 sectype: none 00:31:17.449 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:31:17.449 Perform nvmf subsystem discovery via RPC 00:31:17.449 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:31:17.449 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.449 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.449 [ 00:31:17.449 { 00:31:17.449 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:17.449 "subtype": "Discovery", 00:31:17.449 "listen_addresses": [ 00:31:17.449 { 00:31:17.449 "trtype": "TCP", 00:31:17.449 "adrfam": "IPv4", 00:31:17.449 "traddr": "10.0.0.2", 00:31:17.449 "trsvcid": "4420" 00:31:17.449 } 00:31:17.449 ], 00:31:17.449 "allow_any_host": true, 00:31:17.449 "hosts": [] 00:31:17.449 }, 00:31:17.449 { 00:31:17.449 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:17.449 "subtype": "NVMe", 00:31:17.449 "listen_addresses": [ 00:31:17.449 { 00:31:17.449 "trtype": "TCP", 00:31:17.449 "adrfam": "IPv4", 00:31:17.449 "traddr": "10.0.0.2", 00:31:17.449 "trsvcid": "4420" 00:31:17.449 } 00:31:17.449 ], 00:31:17.449 "allow_any_host": true, 00:31:17.449 "hosts": [], 00:31:17.449 "serial_number": "SPDK00000000000001", 00:31:17.449 "model_number": "SPDK bdev Controller", 00:31:17.449 "max_namespaces": 32, 00:31:17.449 "min_cntlid": 1, 00:31:17.449 "max_cntlid": 65519, 00:31:17.449 "namespaces": [ 00:31:17.449 { 00:31:17.449 "nsid": 1, 00:31:17.449 "bdev_name": "Null1", 00:31:17.449 "name": "Null1", 00:31:17.449 "nguid": "C63801555F6D42468279D4F00B45240A", 00:31:17.449 "uuid": "c6380155-5f6d-4246-8279-d4f00b45240a" 00:31:17.449 } 00:31:17.449 ] 00:31:17.449 }, 00:31:17.449 { 00:31:17.449 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:17.449 "subtype": "NVMe", 00:31:17.449 "listen_addresses": [ 00:31:17.449 { 00:31:17.449 "trtype": "TCP", 00:31:17.449 "adrfam": "IPv4", 00:31:17.449 "traddr": "10.0.0.2", 00:31:17.449 "trsvcid": "4420" 00:31:17.449 } 00:31:17.449 ], 00:31:17.449 "allow_any_host": true, 00:31:17.449 "hosts": [], 00:31:17.449 "serial_number": "SPDK00000000000002", 00:31:17.449 "model_number": "SPDK bdev Controller", 00:31:17.449 "max_namespaces": 32, 00:31:17.449 "min_cntlid": 1, 00:31:17.449 "max_cntlid": 65519, 00:31:17.449 "namespaces": [ 00:31:17.449 { 00:31:17.449 "nsid": 1, 00:31:17.449 "bdev_name": "Null2", 00:31:17.449 "name": "Null2", 00:31:17.449 "nguid": "2E2F519648ED45B68C2511BDF59CAC64", 00:31:17.449 "uuid": "2e2f5196-48ed-45b6-8c25-11bdf59cac64" 00:31:17.449 } 00:31:17.449 ] 00:31:17.449 }, 00:31:17.449 { 00:31:17.449 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:31:17.449 "subtype": "NVMe", 00:31:17.449 "listen_addresses": [ 00:31:17.449 { 00:31:17.449 "trtype": "TCP", 00:31:17.449 "adrfam": "IPv4", 00:31:17.449 "traddr": "10.0.0.2", 00:31:17.449 "trsvcid": "4420" 00:31:17.449 } 00:31:17.449 ], 00:31:17.449 "allow_any_host": true, 
00:31:17.449 "hosts": [], 00:31:17.449 "serial_number": "SPDK00000000000003", 00:31:17.449 "model_number": "SPDK bdev Controller", 00:31:17.449 "max_namespaces": 32, 00:31:17.449 "min_cntlid": 1, 00:31:17.449 "max_cntlid": 65519, 00:31:17.449 "namespaces": [ 00:31:17.449 { 00:31:17.449 "nsid": 1, 00:31:17.449 "bdev_name": "Null3", 00:31:17.450 "name": "Null3", 00:31:17.450 "nguid": "7BF48CAC9F37410C9E2908A6A5E401D6", 00:31:17.450 "uuid": "7bf48cac-9f37-410c-9e29-08a6a5e401d6" 00:31:17.450 } 00:31:17.450 ] 00:31:17.450 }, 00:31:17.450 { 00:31:17.450 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:31:17.450 "subtype": "NVMe", 00:31:17.450 "listen_addresses": [ 00:31:17.450 { 00:31:17.450 "trtype": "TCP", 00:31:17.450 "adrfam": "IPv4", 00:31:17.450 "traddr": "10.0.0.2", 00:31:17.450 "trsvcid": "4420" 00:31:17.450 } 00:31:17.450 ], 00:31:17.450 "allow_any_host": true, 00:31:17.450 "hosts": [], 00:31:17.450 "serial_number": "SPDK00000000000004", 00:31:17.450 "model_number": "SPDK bdev Controller", 00:31:17.450 "max_namespaces": 32, 00:31:17.450 "min_cntlid": 1, 00:31:17.450 "max_cntlid": 65519, 00:31:17.450 "namespaces": [ 00:31:17.450 { 00:31:17.450 "nsid": 1, 00:31:17.450 "bdev_name": "Null4", 00:31:17.450 "name": "Null4", 00:31:17.450 "nguid": "1B605A243E374177B6B7FE2AE08A5474", 00:31:17.450 "uuid": "1b605a24-3e37-4177-b6b7-fe2ae08a5474" 00:31:17.450 } 00:31:17.450 ] 00:31:17.450 } 00:31:17.450 ] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:17.450 rmmod nvme_tcp 00:31:17.450 rmmod nvme_fabrics 00:31:17.450 rmmod nvme_keyring 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2010798 ']' 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2010798 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 2010798 ']' 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 2010798 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:31:17.450 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:17.709 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2010798 00:31:17.709 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:17.709 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:17.709 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2010798' 00:31:17.709 killing process with pid 2010798 00:31:17.709 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 2010798 00:31:17.709 03:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@973 -- # wait 2010798 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:17.709 03:27:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.244 03:28:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:20.244 00:31:20.244 real 0m9.668s 00:31:20.244 user 0m5.271s 00:31:20.244 sys 0m5.168s 00:31:20.245 03:28:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:20.245 03:28:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.245 ************************************ 00:31:20.245 END TEST nvmf_target_discovery 00:31:20.245 ************************************ 00:31:20.245 03:28:01 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:31:20.245 03:28:01 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:20.245 03:28:01 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:20.245 03:28:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:20.245 ************************************ 00:31:20.245 START TEST nvmf_referrals 00:31:20.245 ************************************ 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:31:20.245 * Looking for test storage... 00:31:20.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
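[Editor's sketch] The referral endpoints defined here (127.0.0.2 through 127.0.0.4, registered on port 4430 below) are what the test wires into the discovery service. A minimal by-hand version of that setup, assuming a running nvmf_tgt and SPDK's scripts/rpc.py (the test's rpc_cmd wrapper ultimately calls this script; the invocation path and the spelled-out discovery NQN in place of the test's "discovery" shorthand are assumptions here):
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430   # register one referral per address
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length                # the test expects 3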
00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:31:20.245 03:28:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.822 03:28:07 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:26.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:26.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.822 03:28:07 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:26.822 Found net devices under 0000:86:00.0: cvl_0_0 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:26.822 Found net devices under 0000:86:00.1: cvl_0_1 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:26.822 03:28:07 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:26.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:31:26.822 00:31:26.822 --- 10.0.0.2 ping statistics --- 00:31:26.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.822 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:31:26.822 00:31:26.822 --- 10.0.0.1 ping statistics --- 00:31:26.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.822 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.822 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2014868 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2014868 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # '[' -z 2014868 ']' 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:26.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 [2024-06-11 03:28:07.723937] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:31:26.823 [2024-06-11 03:28:07.723988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.823 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.823 [2024-06-11 03:28:07.790958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.823 [2024-06-11 03:28:07.833431] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.823 [2024-06-11 03:28:07.833470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.823 [2024-06-11 03:28:07.833477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.823 [2024-06-11 03:28:07.833483] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.823 [2024-06-11 03:28:07.833488] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.823 [2024-06-11 03:28:07.833541] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.823 [2024-06-11 03:28:07.833564] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.823 [2024-06-11 03:28:07.833666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.823 [2024-06-11 03:28:07.833667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 [2024-06-11 03:28:07.978887] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 [2024-06-11 03:28:07.992157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
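[Editor's sketch] With the target now listening on 10.0.0.2 port 8009, the trace below reads the referral list back two ways (rpc_cmd nvmf_discovery_get_referrals vs. an initiator-side discovery) and compares them. The host-side half, condensed from the commands in this log and using the host NQN/ID generated earlier in it:
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid=803833e2-2ada-e911-906e-0017a4403562 \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort
  # Expected output here: 127.0.0.2 127.0.0.3 127.0.0.4, matching the RPC view.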
00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.823 03:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:26.823 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.082 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:27.342 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:31:27.601 03:28:08 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:27.601 03:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:31:27.860 03:28:09 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:27.860 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:31:28.121 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:31:28.122 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:31:28.122 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:31:28.122 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:31:28.381 
03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:28.381 rmmod nvme_tcp 00:31:28.381 rmmod nvme_fabrics 00:31:28.381 rmmod nvme_keyring 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2014868 ']' 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2014868 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 2014868 ']' 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 2014868 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2014868 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2014868' 00:31:28.381 killing process with pid 2014868 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@968 -- # kill 2014868 00:31:28.381 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 2014868 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.640 03:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.545 03:28:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:30.545 00:31:30.545 real 0m10.695s 00:31:30.545 user 0m10.450s 00:31:30.545 sys 0m5.396s 00:31:30.545 03:28:11 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:31:30.545 03:28:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:31:30.545 ************************************ 00:31:30.545 END TEST nvmf_referrals 00:31:30.545 ************************************ 00:31:30.545 03:28:11 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:31:30.545 03:28:11 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:31:30.545 03:28:11 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:31:30.545 03:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:30.808 ************************************ 00:31:30.808 START TEST nvmf_connect_disconnect 00:31:30.808 ************************************ 00:31:30.808 03:28:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:31:30.808 * Looking for test storage... 00:31:30.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.808 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:30.809 03:28:12 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:31:30.809 03:28:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.379 03:28:18 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:37.379 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:37.379 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
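[Editor's sketch] Before the connect/disconnect runs can start, nvmf_tcp_init repeats the network-namespace plumbing already traced in the referrals test above: the target-side e810 port is moved into its own netns so initiator and target can share one machine. Condensed from the trace, assuming the two ports surface as cvl_0_0/cvl_0_1 as they do in this log:
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target netns -> initiator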
00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.379 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:37.380 Found net devices under 0000:86:00.0: cvl_0_0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:37.380 Found net devices under 0000:86:00.1: cvl_0_1 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:37.380 03:28:18 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:37.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:31:37.380 00:31:37.380 --- 10.0.0.2 ping statistics --- 00:31:37.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.380 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:37.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:31:37.380 00:31:37.380 --- 10.0.0.1 ping statistics --- 00:31:37.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.380 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2019242 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2019242 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 2019242 ']' 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.380 [2024-06-11 03:28:18.518501] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
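Before nvmf_tgt was launched above, nvmf_tcp_init carved the two ports into a back-to-back test topology: cvl_0_0 moved into a private namespace for the target side, cvl_0_1 stayed in the root namespace for the initiator, and the ping pair verified both directions. Condensed from the commands traced above, with the same interface names and addresses:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator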
00:31:37.380 [2024-06-11 03:28:18.518540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:37.380 EAL: No free 2048 kB hugepages reported on node 1 00:31:37.380 [2024-06-11 03:28:18.579884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:37.380 [2024-06-11 03:28:18.622080] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:37.380 [2024-06-11 03:28:18.622120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:37.380 [2024-06-11 03:28:18.622128] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:37.380 [2024-06-11 03:28:18.622134] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:37.380 [2024-06-11 03:28:18.622140] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:37.380 [2024-06-11 03:28:18.622181] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.380 [2024-06-11 03:28:18.622256] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.380 [2024-06-11 03:28:18.622345] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:31:37.380 [2024-06-11 03:28:18.622346] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.380 [2024-06-11 03:28:18.769935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.380 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:37.640 03:28:18 
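From here the target is configured purely over its RPC socket: the transport and malloc bdev created above, then the subsystem plus the namespace and listener calls that continue below. rpc_cmd is SPDK's thin wrapper around scripts/rpc.py; the same sequence written against rpc.py directly (default socket /var/tmp/spdk.sock assumed) would be:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0    # transport options exactly as traced
  rpc.py bdev_malloc_create 64 512                       # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE, returns "Malloc0"
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420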
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:37.640 [2024-06-11 03:28:18.821722] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:31:37.640 03:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:31:40.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line repeats once per connect/disconnect iteration, with timestamps advancing from 00:31:42.074 through 00:35:27.225; the duplicate lines are omitted here]
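Each collapsed iteration connects to cnode1 over NVMe/TCP and immediately tears the session down; the nvme-cli disconnect message is the only output the loop leaves at this log level. A minimal sketch of the loop body (the real connect_disconnect.sh also waits for the controller device to appear and disappear between the two calls):

  for i in $(seq 1 100); do                                        # num_iterations=100, set above
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                  # prints "... disconnected 1 controller(s)"
  done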
00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:27.225 rmmod nvme_tcp 00:35:27.225 rmmod nvme_fabrics 00:35:27.225 rmmod nvme_keyring 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2019242 ']' 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2019242 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z
2019242 ']' 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 2019242 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2019242 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2019242' 00:35:27.225 killing process with pid 2019242 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 2019242 00:35:27.225 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 2019242 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:27.484 03:32:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.020 03:32:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:30.020 00:35:30.020 real 3m58.892s 00:35:30.020 user 15m14.427s 00:35:30.020 sys 0m20.039s 00:35:30.020 03:32:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:30.020 03:32:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:30.020 ************************************ 00:35:30.020 END TEST nvmf_connect_disconnect 00:35:30.020 ************************************ 00:35:30.020 03:32:10 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:35:30.020 03:32:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:30.020 03:32:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:30.020 03:32:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:30.020 ************************************ 00:35:30.020 START TEST nvmf_multitarget 00:35:30.020 ************************************ 00:35:30.020 03:32:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:35:30.020 * Looking for test storage... 
00:35:30.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:35:30.020 03:32:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:35:36.594 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.594 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:35:36.594 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:36.594 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:36.594 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:36.595 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:36.595 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:36.595 Found net devices under 0000:86:00.0: cvl_0_0 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:36.595 Found net devices under 0000:86:00.1: cvl_0_1 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.595 03:32:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:36.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:36.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:35:36.595 00:35:36.595 --- 10.0.0.2 ping statistics --- 00:35:36.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.595 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:36.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:35:36.595 00:35:36.595 --- 10.0.0.1 ping statistics --- 00:35:36.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.595 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2062870 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2062870 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 2062870 ']' 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:36.595 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:35:36.596 [2024-06-11 03:32:17.201662] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
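nvmfappstart launches nvmf_tgt inside the target namespace (the ip netns exec prefix traced above) and then blocks in waitforlisten until the RPC socket answers or the retry budget runs out (max_retries=100 and rpc_addr=/var/tmp/spdk.sock, both traced above). A sketch of that readiness poll, assuming scripts/rpc.py as the probe; the real helper lives in autotest_common.sh:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do                      # max_retries=100
      kill -0 "$pid" 2>/dev/null || return 1             # give up if the app already died
      rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1                                             # app never started listening
  }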
00:35:36.596 [2024-06-11 03:32:17.201703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.596 EAL: No free 2048 kB hugepages reported on node 1 00:35:36.596 [2024-06-11 03:32:17.262528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:36.596 [2024-06-11 03:32:17.302862] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.596 [2024-06-11 03:32:17.302900] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.596 [2024-06-11 03:32:17.302907] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.596 [2024-06-11 03:32:17.302913] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.596 [2024-06-11 03:32:17.302917] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:36.596 [2024-06-11 03:32:17.303008] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.596 [2024-06-11 03:32:17.303107] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.596 [2024-06-11 03:32:17.303193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.596 [2024-06-11 03:32:17.303193] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:35:36.596 "nvmf_tgt_1" 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:35:36.596 "nvmf_tgt_2" 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:35:36.596 
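The multitarget checks above and below reduce to a short multitarget_rpc.py session: count the one default target, add nvmf_tgt_1 and nvmf_tgt_2, confirm three targets, then delete both again. Condensed from the traced commands, with the jq count checks the script performs:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]       # only the default target so far
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]       # default + the two just created
  $rpc nvmf_delete_target -n nvmf_tgt_1                  # the deletes traced below
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]       # back to the default alone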
03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:35:36.596 true 00:35:36.596 03:32:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:35:36.855 true 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:36.855 rmmod nvme_tcp 00:35:36.855 rmmod nvme_fabrics 00:35:36.855 rmmod nvme_keyring 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2062870 ']' 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2062870 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 2062870 ']' 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 2062870 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:35:36.855 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2062870 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2062870' 00:35:37.114 killing process with pid 2062870 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 2062870 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 2062870 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:37.114 03:32:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.652 03:32:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:39.652 00:35:39.652 real 0m9.608s 00:35:39.652 user 0m6.950s 00:35:39.652 sys 0m4.971s 00:35:39.652 03:32:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:35:39.652 03:32:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:35:39.652 ************************************ 00:35:39.652 END TEST nvmf_multitarget 00:35:39.652 ************************************ 00:35:39.652 03:32:20 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:35:39.652 03:32:20 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:35:39.652 03:32:20 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:35:39.652 03:32:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:39.652 ************************************ 00:35:39.652 START TEST nvmf_rpc 00:35:39.652 ************************************ 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:35:39.652 * Looking for test storage... 00:35:39.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:39.652 03:32:20 
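Each sub-test in this log runs under run_test, which produces the START/END banners and the time(1) real/user/sys summary seen at every test boundary above. A stand-in consistent with those banners (the real helper in autotest_common.sh does additional bookkeeping):

  run_test() {                                   # sketch only
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                                    # yields the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }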
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:39.652 03:32:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:39.653 
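In condensed form, the host identity that nvmf/common.sh assembles above reduces to the sketch below. Only the gen-hostnqn call and the NVME_HOST array appear verbatim in the trace; the parameter expansion used to peel the host ID off the NQN is an assumption about the helper's internals.

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: host ID is the trailing UUID, as seen in this run
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # later 'nvme connect' calls pass "${NVME_HOST[@]}" so the target can match the connecting host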
03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:35:39.653 03:32:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:46.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:46.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:46.223 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:46.224 Found net devices under 0000:86:00.0: cvl_0_0 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.224 
03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:46.224 Found net devices under 0000:86:00.1: cvl_0_1 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:46.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:46.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:35:46.224 00:35:46.224 --- 10.0.0.2 ping statistics --- 00:35:46.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.224 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:46.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:46.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:35:46.224 00:35:46.224 --- 10.0.0.1 ping statistics --- 00:35:46.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:46.224 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2066930 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2066930 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 2066930 ']' 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:46.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.224 [2024-06-11 03:32:26.797642] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
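Condensed, the point-to-point test network that the two pings above verify was assembled by nvmf_tcp_init as follows (commands taken from the trace; the cvl_0_* interface names and 10.0.0.0/24 addresses are specific to this run):

    ip netns add cvl_0_0_ns_spdk                                  # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # port 0 becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # port 1 stays with the initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

With both pings answering, nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the EAL start-up shown next.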
00:35:46.224 [2024-06-11 03:32:26.797686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:46.224 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.224 [2024-06-11 03:32:26.860246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:46.224 [2024-06-11 03:32:26.902539] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:46.224 [2024-06-11 03:32:26.902577] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:46.224 [2024-06-11 03:32:26.902585] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:46.224 [2024-06-11 03:32:26.902591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:46.224 [2024-06-11 03:32:26.902596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:46.224 [2024-06-11 03:32:26.902639] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.224 [2024-06-11 03:32:26.902738] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:35:46.224 [2024-06-11 03:32:26.902825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.224 [2024-06-11 03:32:26.902825] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:35:46.224 03:32:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:35:46.224 "tick_rate": 2100000000, 00:35:46.224 "poll_groups": [ 00:35:46.224 { 00:35:46.224 "name": "nvmf_tgt_poll_group_000", 00:35:46.224 "admin_qpairs": 0, 00:35:46.224 "io_qpairs": 0, 00:35:46.224 "current_admin_qpairs": 0, 00:35:46.224 "current_io_qpairs": 0, 00:35:46.224 "pending_bdev_io": 0, 00:35:46.224 "completed_nvme_io": 0, 00:35:46.224 "transports": [] 00:35:46.224 }, 00:35:46.224 { 00:35:46.224 "name": "nvmf_tgt_poll_group_001", 00:35:46.224 "admin_qpairs": 0, 00:35:46.224 "io_qpairs": 0, 00:35:46.224 "current_admin_qpairs": 0, 00:35:46.224 "current_io_qpairs": 0, 00:35:46.224 "pending_bdev_io": 0, 00:35:46.224 "completed_nvme_io": 0, 00:35:46.224 "transports": [] 00:35:46.224 }, 00:35:46.224 { 00:35:46.224 "name": "nvmf_tgt_poll_group_002", 00:35:46.224 "admin_qpairs": 0, 00:35:46.224 "io_qpairs": 0, 00:35:46.224 "current_admin_qpairs": 0, 00:35:46.224 "current_io_qpairs": 0, 00:35:46.224 "pending_bdev_io": 0, 00:35:46.224 "completed_nvme_io": 0, 00:35:46.224 "transports": [] 
00:35:46.224 }, 00:35:46.224 { 00:35:46.224 "name": "nvmf_tgt_poll_group_003", 00:35:46.224 "admin_qpairs": 0, 00:35:46.224 "io_qpairs": 0, 00:35:46.224 "current_admin_qpairs": 0, 00:35:46.224 "current_io_qpairs": 0, 00:35:46.224 "pending_bdev_io": 0, 00:35:46.224 "completed_nvme_io": 0, 00:35:46.224 "transports": [] 00:35:46.224 } 00:35:46.224 ] 00:35:46.224 }' 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:35:46.224 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 [2024-06-11 03:32:27.142380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:35:46.225 "tick_rate": 2100000000, 00:35:46.225 "poll_groups": [ 00:35:46.225 { 00:35:46.225 "name": "nvmf_tgt_poll_group_000", 00:35:46.225 "admin_qpairs": 0, 00:35:46.225 "io_qpairs": 0, 00:35:46.225 "current_admin_qpairs": 0, 00:35:46.225 "current_io_qpairs": 0, 00:35:46.225 "pending_bdev_io": 0, 00:35:46.225 "completed_nvme_io": 0, 00:35:46.225 "transports": [ 00:35:46.225 { 00:35:46.225 "trtype": "TCP" 00:35:46.225 } 00:35:46.225 ] 00:35:46.225 }, 00:35:46.225 { 00:35:46.225 "name": "nvmf_tgt_poll_group_001", 00:35:46.225 "admin_qpairs": 0, 00:35:46.225 "io_qpairs": 0, 00:35:46.225 "current_admin_qpairs": 0, 00:35:46.225 "current_io_qpairs": 0, 00:35:46.225 "pending_bdev_io": 0, 00:35:46.225 "completed_nvme_io": 0, 00:35:46.225 "transports": [ 00:35:46.225 { 00:35:46.225 "trtype": "TCP" 00:35:46.225 } 00:35:46.225 ] 00:35:46.225 }, 00:35:46.225 { 00:35:46.225 "name": "nvmf_tgt_poll_group_002", 00:35:46.225 "admin_qpairs": 0, 00:35:46.225 "io_qpairs": 0, 00:35:46.225 "current_admin_qpairs": 0, 00:35:46.225 "current_io_qpairs": 0, 00:35:46.225 "pending_bdev_io": 0, 00:35:46.225 "completed_nvme_io": 0, 00:35:46.225 "transports": [ 00:35:46.225 { 00:35:46.225 "trtype": "TCP" 00:35:46.225 } 00:35:46.225 ] 00:35:46.225 }, 00:35:46.225 { 00:35:46.225 "name": "nvmf_tgt_poll_group_003", 00:35:46.225 "admin_qpairs": 0, 00:35:46.225 "io_qpairs": 0, 00:35:46.225 "current_admin_qpairs": 0, 00:35:46.225 "current_io_qpairs": 0, 00:35:46.225 "pending_bdev_io": 0, 00:35:46.225 "completed_nvme_io": 0, 00:35:46.225 "transports": [ 00:35:46.225 { 00:35:46.225 "trtype": "TCP" 00:35:46.225 } 00:35:46.225 ] 00:35:46.225 } 00:35:46.225 ] 
00:35:46.225 }' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 Malloc1 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 [2024-06-11 03:32:27.302158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:35:46.225 [2024-06-11 03:32:27.330625] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:35:46.225 Failed to write to /dev/nvme-fabrics: Input/output error 00:35:46.225 could not add new controller: failed to write to nvme-fabrics device 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:46.225 03:32:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:47.162 03:32:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:35:47.162 03:32:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:35:47.162 03:32:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:35:47.162 03:32:28 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:35:47.162 03:32:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:35:49.067 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:35:49.067 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:35:49.067 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:49.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x 
/usr/sbin/nvme ]] 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:49.327 [2024-06-11 03:32:30.603579] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:35:49.327 Failed to write to /dev/nvme-fabrics: Input/output error 00:35:49.327 could not add new controller: failed to write to nvme-fabrics device 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:49.327 03:32:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:50.356 03:32:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:35:50.356 03:32:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:35:50.356 03:32:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:35:50.356 03:32:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:35:50.356 03:32:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:52.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 
-- # grep -q -w SPDKISFASTANDAWESOME 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:35:52.889 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:52.890 [2024-06-11 03:32:33.916013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:52.890 03:32:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:53.827 03:32:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:35:53.827 03:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:35:53.827 03:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:35:53.827 03:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:35:53.827 03:32:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:35:55.731 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:35:55.731 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:35:55.731 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:35:55.731 03:32:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1206 -- # nvme_devices=1 00:35:55.731 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:35:55.731 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:35:55.731 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:55.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:55.990 [2024-06-11 03:32:37.209205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:35:55.990 
03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:55.990 03:32:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:56.926 03:32:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:35:56.926 03:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:35:56.926 03:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:35:56.926 03:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:35:56.926 03:32:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:59.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:35:59.459 03:32:40 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:59.459 [2024-06-11 03:32:40.452171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:59.459 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.460 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:35:59.460 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:59.460 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:35:59.460 03:32:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:59.460 03:32:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:00.404 03:32:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:36:00.404 03:32:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:36:00.404 03:32:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:36:00.404 03:32:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:36:00.404 03:32:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:02.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.308 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.309 [2024-06-11 03:32:43.706397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.309 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.309 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:36:02.309 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.309 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.567 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.567 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:36:02.567 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:02.567 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:02.567 03:32:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:02.568 03:32:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:03.505 03:32:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:36:03.505 03:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:36:03.505 03:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 
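Each of the five passes of this loop follows the same shape; with the harness wrappers stripped it is roughly the following sketch (rpc_cmd is assumed to forward to SPDK's scripts/rpc.py against the target in the namespace; the NQN, serial, namespace ID and address are the ones used in this run):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # expose the bdev as NSID 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial: block until a block device with serial SPDKISFASTANDAWESOME appears
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1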
00:36:03.505 03:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:36:03.505 03:32:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:36:06.050 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:36:06.050 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:36:06.050 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:36:06.050 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:06.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.051 03:32:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:06.051 [2024-06-11 03:32:47.013664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.051 03:32:47 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:06.051 03:32:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:06.987 03:32:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:36:06.987 03:32:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:36:06.987 03:32:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:36:06.987 03:32:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:36:06.987 03:32:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:08.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
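The trace above is one full pass of rpc.sh's connect loop. Condensed into plain shell, and assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to the running nvmf_tgt (the NQN, serial, address, and host identity are the values from this run; the bounded 15-retry wait is simplified to an open loop), each iteration amounts to:

    # build the subsystem, expose it over NVMe/TCP, back it with Malloc1 as nsid 5
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # connect from the initiator side and poll lsblk until the serial shows up
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
                 --hostid=803833e2-2ada-e911-906e-0017a4403562 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) != 1 )); do sleep 2; done
    # tear everything back down for the next iteration
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1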
00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:08.892 [2024-06-11 03:32:50.276293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:08.892 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 [2024-06-11 03:32:50.324401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 [2024-06-11 03:32:50.376571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 [2024-06-11 03:32:50.424757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.152 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 [2024-06-11 03:32:50.472923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:36:09.153 "tick_rate": 2100000000, 00:36:09.153 "poll_groups": [ 00:36:09.153 { 00:36:09.153 "name": "nvmf_tgt_poll_group_000", 00:36:09.153 "admin_qpairs": 2, 00:36:09.153 
"io_qpairs": 168, 00:36:09.153 "current_admin_qpairs": 0, 00:36:09.153 "current_io_qpairs": 0, 00:36:09.153 "pending_bdev_io": 0, 00:36:09.153 "completed_nvme_io": 219, 00:36:09.153 "transports": [ 00:36:09.153 { 00:36:09.153 "trtype": "TCP" 00:36:09.153 } 00:36:09.153 ] 00:36:09.153 }, 00:36:09.153 { 00:36:09.153 "name": "nvmf_tgt_poll_group_001", 00:36:09.153 "admin_qpairs": 2, 00:36:09.153 "io_qpairs": 168, 00:36:09.153 "current_admin_qpairs": 0, 00:36:09.153 "current_io_qpairs": 0, 00:36:09.153 "pending_bdev_io": 0, 00:36:09.153 "completed_nvme_io": 361, 00:36:09.153 "transports": [ 00:36:09.153 { 00:36:09.153 "trtype": "TCP" 00:36:09.153 } 00:36:09.153 ] 00:36:09.153 }, 00:36:09.153 { 00:36:09.153 "name": "nvmf_tgt_poll_group_002", 00:36:09.153 "admin_qpairs": 1, 00:36:09.153 "io_qpairs": 168, 00:36:09.153 "current_admin_qpairs": 0, 00:36:09.153 "current_io_qpairs": 0, 00:36:09.153 "pending_bdev_io": 0, 00:36:09.153 "completed_nvme_io": 212, 00:36:09.153 "transports": [ 00:36:09.153 { 00:36:09.153 "trtype": "TCP" 00:36:09.153 } 00:36:09.153 ] 00:36:09.153 }, 00:36:09.153 { 00:36:09.153 "name": "nvmf_tgt_poll_group_003", 00:36:09.153 "admin_qpairs": 2, 00:36:09.153 "io_qpairs": 168, 00:36:09.153 "current_admin_qpairs": 0, 00:36:09.153 "current_io_qpairs": 0, 00:36:09.153 "pending_bdev_io": 0, 00:36:09.153 "completed_nvme_io": 230, 00:36:09.153 "transports": [ 00:36:09.153 { 00:36:09.153 "trtype": "TCP" 00:36:09.153 } 00:36:09.153 ] 00:36:09.153 } 00:36:09.153 ] 00:36:09.153 }' 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:36:09.153 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:09.412 rmmod nvme_tcp 00:36:09.412 rmmod nvme_fabrics 00:36:09.412 rmmod nvme_keyring 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:36:09.412 03:32:50 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2066930 ']' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2066930 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 2066930 ']' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 2066930 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2066930 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2066930' 00:36:09.412 killing process with pid 2066930 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 2066930 00:36:09.412 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 2066930 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:09.671 03:32:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.577 03:32:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:11.836 00:36:11.836 real 0m32.389s 00:36:11.836 user 1m37.438s 00:36:11.836 sys 0m6.185s 00:36:11.836 03:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:11.836 03:32:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:36:11.836 ************************************ 00:36:11.836 END TEST nvmf_rpc 00:36:11.836 ************************************ 00:36:11.836 03:32:53 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:36:11.836 03:32:53 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:11.836 03:32:53 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:11.836 03:32:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.836 ************************************ 00:36:11.836 START TEST nvmf_invalid 00:36:11.836 ************************************ 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:36:11.836 * Looking for test storage... 
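The killprocess step above is what took nvmf_tgt (pid 2066930) down at the end of nvmf_rpc. Simplified to just the checks visible in the trace (the real helper in autotest_common.sh carries more platform handling, so treat this as a sketch):

    # refuse to signal a dead pid or a sudo wrapper, then terminate and reap
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1
        if [[ $(uname) = Linux && $(ps --no-headers -o comm= "$pid") = sudo ]]; then
            return 1   # assumed: don't SIGTERM the sudo parent directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }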
00:36:11.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.836 03:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:36:11.837 03:32:53 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:18.406 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:18.406 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:18.406 Found net devices under 0000:86:00.0: cvl_0_0 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:18.406 Found net devices under 0000:86:00.1: cvl_0_1 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:18.406 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:18.407 03:32:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:18.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:18.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:36:18.407 00:36:18.407 --- 10.0.0.2 ping statistics --- 00:36:18.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.407 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:18.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:18.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:36:18.407 00:36:18.407 --- 10.0.0.1 ping statistics --- 00:36:18.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:18.407 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2074822 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2074822 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 2074822 ']' 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:36:18.407 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:36:18.407 [2024-06-11 03:32:59.151838] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
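Stepping back, the nvmf_tcp_init sequence above split the two CVL ports between the root namespace (initiator) and a fresh network namespace (target), then verified reachability in both directions. Replayed as plain shell with this run's device names and addresses (the preliminary addr-flush steps are omitted):

    # target port lives in its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns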
00:36:18.407 [2024-06-11 03:32:59.151879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:18.407 EAL: No free 2048 kB hugepages reported on node 1 00:36:18.407 [2024-06-11 03:32:59.214088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:18.407 [2024-06-11 03:32:59.255865] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:18.407 [2024-06-11 03:32:59.255904] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:18.407 [2024-06-11 03:32:59.255910] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:18.407 [2024-06-11 03:32:59.255917] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:18.407 [2024-06-11 03:32:59.255921] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:18.407 [2024-06-11 03:32:59.255966] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:18.407 [2024-06-11 03:32:59.256068] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:18.407 [2024-06-11 03:32:59.256090] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:36:18.407 [2024-06-11 03:32:59.256091] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:36:18.673 03:32:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20594 00:36:19.025 [2024-06-11 03:33:00.158634] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:36:19.025 { 00:36:19.025 "nqn": "nqn.2016-06.io.spdk:cnode20594", 00:36:19.025 "tgt_name": "foobar", 00:36:19.025 "method": "nvmf_create_subsystem", 00:36:19.025 "req_id": 1 00:36:19.025 } 00:36:19.025 Got JSON-RPC error response 00:36:19.025 response: 00:36:19.025 { 00:36:19.025 "code": -32603, 00:36:19.025 "message": "Unable to find target foobar" 00:36:19.025 }' 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:36:19.025 { 00:36:19.025 "nqn": "nqn.2016-06.io.spdk:cnode20594", 00:36:19.025 "tgt_name": "foobar", 00:36:19.025 "method": "nvmf_create_subsystem", 00:36:19.025 "req_id": 1 00:36:19.025 } 00:36:19.025 Got JSON-RPC error response 00:36:19.025 response: 00:36:19.025 { 00:36:19.025 "code": -32603, 00:36:19.025 "message": "Unable to find target foobar" 00:36:19.025 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12796 00:36:19.025 [2024-06-11 03:33:00.339303] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12796: invalid serial number 'SPDKISFASTANDAWESOME' 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:36:19.025 { 00:36:19.025 "nqn": "nqn.2016-06.io.spdk:cnode12796", 00:36:19.025 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:36:19.025 "method": "nvmf_create_subsystem", 00:36:19.025 "req_id": 1 00:36:19.025 } 00:36:19.025 Got JSON-RPC error response 00:36:19.025 response: 00:36:19.025 { 00:36:19.025 "code": -32602, 00:36:19.025 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:36:19.025 }' 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:36:19.025 { 00:36:19.025 "nqn": "nqn.2016-06.io.spdk:cnode12796", 00:36:19.025 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:36:19.025 "method": "nvmf_create_subsystem", 00:36:19.025 "req_id": 1 00:36:19.025 } 00:36:19.025 Got JSON-RPC error response 00:36:19.025 response: 00:36:19.025 { 00:36:19.025 "code": -32602, 00:36:19.025 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:36:19.025 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:36:19.025 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15794 00:36:19.285 [2024-06-11 03:33:00.531907] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15794: invalid model number 'SPDK_Controller' 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:36:19.285 { 00:36:19.285 "nqn": "nqn.2016-06.io.spdk:cnode15794", 00:36:19.285 "model_number": "SPDK_Controller\u001f", 00:36:19.285 "method": "nvmf_create_subsystem", 00:36:19.285 "req_id": 1 00:36:19.285 } 00:36:19.285 Got JSON-RPC error response 00:36:19.285 response: 00:36:19.285 { 00:36:19.285 "code": -32602, 00:36:19.285 "message": "Invalid MN SPDK_Controller\u001f" 00:36:19.285 }' 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:36:19.285 { 00:36:19.285 "nqn": "nqn.2016-06.io.spdk:cnode15794", 00:36:19.285 "model_number": "SPDK_Controller\u001f", 00:36:19.285 "method": "nvmf_create_subsystem", 00:36:19.285 "req_id": 1 00:36:19.285 } 00:36:19.285 Got JSON-RPC error response 00:36:19.285 response: 00:36:19.285 { 00:36:19.285 "code": -32602, 00:36:19.285 "message": "Invalid MN SPDK_Controller\u001f" 00:36:19.285 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:36:19.285 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 127 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.286 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '$iGu4#]&w[Rck8qxxT}"' 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '$iGu4#]&w[Rck8qxxT}"' nqn.2016-06.io.spdk:cnode6907 00:36:19.546 [2024-06-11 03:33:00.848956] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6907: invalid serial number '$iGu4#]&w[Rck8qxxT}"' 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:36:19.546 { 00:36:19.546 "nqn": "nqn.2016-06.io.spdk:cnode6907", 00:36:19.546 "serial_number": "$iGu4#]&w[Rck8q\u007fxxT}\"", 00:36:19.546 "method": "nvmf_create_subsystem", 00:36:19.546 "req_id": 1 00:36:19.546 } 00:36:19.546 Got JSON-RPC error response 00:36:19.546 response: 00:36:19.546 { 00:36:19.546 "code": -32602, 
00:36:19.546 "message": "Invalid SN $iGu4#]&w[Rck8q\u007fxxT}\"" 00:36:19.546 }' 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:36:19.546 { 00:36:19.546 "nqn": "nqn.2016-06.io.spdk:cnode6907", 00:36:19.546 "serial_number": "$iGu4#]&w[Rck8q\u007fxxT}\"", 00:36:19.546 "method": "nvmf_create_subsystem", 00:36:19.546 "req_id": 1 00:36:19.546 } 00:36:19.546 Got JSON-RPC error response 00:36:19.546 response: 00:36:19.546 { 00:36:19.546 "code": -32602, 00:36:19.546 "message": "Invalid SN $iGu4#]&w[Rck8q\u007fxxT}\"" 00:36:19.546 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.546 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:36:19.547 
03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:36:19.547 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:36:19.807 
03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:36:19.807 
03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:36:19.807 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:36:19.808 03:33:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z/op?FLQ 8{h Jm,><0Mt)V<a%8-qm2P;!-j7L4GX' [...] common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:21.882 03:33:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.444 03:33:05
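
The character-by-character trace above is invalid.sh's gen_random_s helper at work: it draws random code points from ASCII 32-127, converts each with printf %x, materializes it with echo -e '\xNN', and appends it to string. A minimal sketch reconstructed from that xtrace output (the leading-dash handling at invalid.sh@28 is only partially visible in the log, so that line is an assumption):

    # Sketch of gen_random_s as seen in the trace: build a random string of
    # $1 printable characters drawn from code points 32..127.
    gen_random_s() {
        local length=$1 ll string
        local chars=({32..127})   # matches the chars=('32' ... '127') array in the trace
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code point, render it via printf %x + echo -e, append it
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # assumed: invalid.sh@28 escapes a leading '-' so the string is not parsed as an option
        [[ ${string:0:1} == - ]] && string="\\$string"
        echo "$string"
    }

The resulting string is handed to rpc.py nvmf_create_subsystem -s as a deliberately bogus serial number, and the test passes when the JSON-RPC reply matches *Invalid SN*, as in the request/response block above.
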
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:24.444 00:36:24.444 real 0m12.186s 00:36:24.444 user 0m19.445s 00:36:24.444 sys 0m5.460s 00:36:24.444 03:33:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:24.444 03:33:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:36:24.444 ************************************ 00:36:24.444 END TEST nvmf_invalid 00:36:24.444 ************************************ 00:36:24.444 03:33:05 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:36:24.444 03:33:05 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:24.444 03:33:05 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:24.444 03:33:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:24.444 ************************************ 00:36:24.444 START TEST nvmf_abort 00:36:24.444 ************************************ 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:36:24.444 * Looking for test storage... 00:36:24.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.444 03:33:05 
nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:24.444 03:33:05 
nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:36:24.444 03:33:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:29.822 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:29.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:29.823 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:29.823 Found net devices under 0000:86:00.0: cvl_0_0 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:29.823 Found net devices under 0000:86:00.1: cvl_0_1 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:29.823 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:30.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:30.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:36:30.083 00:36:30.083 --- 10.0.0.2 ping statistics --- 00:36:30.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.083 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:30.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:30.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:36:30.083 00:36:30.083 --- 10.0.0.1 ping statistics --- 00:36:30.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.083 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:36:30.083 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2080003 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2080003 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 2080003 ']' 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:30.341 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.341 [2024-06-11 03:33:11.568362] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
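
The two successful pings above confirm the topology that nvmf_tcp_init assembled for this phy-mode run: one e810 port (cvl_0_0, the target side) is moved into the cvl_0_0_ns_spdk network namespace as 10.0.0.2, while its peer (cvl_0_1, the initiator side) stays in the root namespace as 10.0.0.1. Regrouped from the commands in the trace (interface names are specific to this host):

    # target-side port, isolated in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator-side port, left in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

From here on, every nvmf_tgt launch is prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD prepended to NVMF_APP at nvmf/common.sh@270), which is why the target listens on 10.0.0.2 while the initiator tools run from the root namespace.
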
00:36:30.341 [2024-06-11 03:33:11.568406] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.341 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.341 [2024-06-11 03:33:11.630987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:30.341 [2024-06-11 03:33:11.672120] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:30.341 [2024-06-11 03:33:11.672157] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.341 [2024-06-11 03:33:11.672164] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.342 [2024-06-11 03:33:11.672169] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.342 [2024-06-11 03:33:11.672174] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.342 [2024-06-11 03:33:11.672276] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.342 [2024-06-11 03:33:11.672365] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.342 [2024-06-11 03:33:11.672366] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 [2024-06-11 03:33:11.797399] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 Malloc0 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 Delay0 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:30.600 03:33:11 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 [2024-06-11 03:33:11.873642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.600 03:33:11 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:30.601 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.859 [2024-06-11 03:33:12.026161] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:32.761 Initializing NVMe Controllers 00:36:32.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:32.761 controller IO queue size 128 less than required 00:36:32.761 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:32.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:32.761 Initialization complete. Launching workers. 
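
Before the abort statistics that follow, it helps to see the whole setup the trace just performed in one place. Condensed from the rpc_cmd and example invocations above (the absolute /var/jenkins/... paths are shortened here to rpc.py and build/examples/abort):

    # target side (inside the cvl_0_0_ns_spdk namespace)
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # wrap Malloc0 in a delay bdev so I/Os stay outstanding long enough to abort
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # initiator side: run the abort example at queue depth 128
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The injected delay is what makes the numbers below meaningful: with I/Os parked in the Delay0 bdev, almost every abort finds its command still outstanding (success 45009 of 45066 submitted), and only 57 race with a completion.
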
00:36:32.761 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 45005 00:36:32.761 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 45066, failed to submit 62 00:36:32.761 success 45009, unsuccess 57, failed 0 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:36:32.761 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:32.762 rmmod nvme_tcp 00:36:32.762 rmmod nvme_fabrics 00:36:32.762 rmmod nvme_keyring 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2080003 ']' 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2080003 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 2080003 ']' 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 2080003 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:36:32.762 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2080003 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2080003' 00:36:33.020 killing process with pid 2080003 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@968 -- # kill 2080003 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@973 -- # wait 2080003 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:33.020 03:33:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.556 03:33:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:35.556 00:36:35.556 real 0m11.134s 00:36:35.556 user 0m11.287s 00:36:35.556 sys 0m5.525s 00:36:35.556 03:33:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:36:35.556 03:33:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.556 ************************************ 00:36:35.556 END TEST nvmf_abort 00:36:35.556 ************************************ 00:36:35.556 03:33:16 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:36:35.556 03:33:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:36:35.556 03:33:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:36:35.556 03:33:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:35.556 ************************************ 00:36:35.556 START TEST nvmf_ns_hotplug_stress 00:36:35.556 ************************************ 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:36:35.556 * Looking for test storage... 00:36:35.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:35.556 03:33:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:35.556 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:35.557 03:33:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:36:35.557 03:33:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:42.128 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:42.129 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:42.129 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:42.129 03:33:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:42.129 Found net devices under 0000:86:00.0: cvl_0_0 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:42.129 Found net devices under 0000:86:00.1: cvl_0_1 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
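The two interface assignments above close out the PCI scan: gather_supported_nvmf_pci_devs in nvmf/common.sh walks the known Intel E810 device IDs (0x1592, 0x159b, seen as "Found 0000:86:00.0 (0x8086 - 0x159b)" above), resolves each bound function to its kernel net device through sysfs, and the two cvl_0_* ports it finds become the target and initiator interfaces. A rough standalone sketch of that lookup in bash (a sketch only, not the helper itself; the real function also covers X722 and Mellanox IDs and unbound-driver handling):

  # enumerate PCI functions matching the Intel E810 IDs reported in the trace
  for pci in /sys/bus/pci/devices/*; do
    vendor=$(< "$pci/vendor")
    device=$(< "$pci/device")
    [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]] || continue
    # a function bound to a network driver exposes its netdev name under <pci>/net/
    for net in "$pci"/net/*; do
      [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
  done

With two ports found, the script takes cvl_0_0 as NVMF_TARGET_INTERFACE and cvl_0_1 as NVMF_INITIATOR_INTERFACE, which is exactly the split the netns setup below builds on.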
00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:42.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:42.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:36:42.129 00:36:42.129 --- 10.0.0.2 ping statistics --- 00:36:42.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.129 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:42.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:42.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:36:42.129 00:36:42.129 --- 10.0.0.1 ping statistics --- 00:36:42.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.129 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2084302 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2084302 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 2084302 ']' 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.129 [2024-06-11 03:33:22.628929] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:36:42.129 [2024-06-11 03:33:22.628974] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:42.129 EAL: No free 2048 kB hugepages reported on node 1 00:36:42.129 [2024-06-11 03:33:22.691159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:42.129 [2024-06-11 03:33:22.731955] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:42.129 [2024-06-11 03:33:22.731994] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:42.129 [2024-06-11 03:33:22.732001] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:42.129 [2024-06-11 03:33:22.732007] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:42.129 [2024-06-11 03:33:22.732017] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:42.129 [2024-06-11 03:33:22.732116] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:36:42.129 [2024-06-11 03:33:22.732205] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:36:42.129 [2024-06-11 03:33:22.732206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:42.129 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:42.130 03:33:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:42.130 [2024-06-11 03:33:23.004939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:42.130 03:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:42.130 03:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:42.130 [2024-06-11 03:33:23.350217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:42.130 03:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:42.389 03:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:42.389 Malloc0 00:36:42.389 03:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:42.648 Delay0 00:36:42.648 03:33:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.907 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:42.907 NULL1 00:36:43.166 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:43.166 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:43.166 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2084571 00:36:43.166 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:43.166 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.166 EAL: No free 2048 kB hugepages reported on node 1 00:36:43.425 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.683 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:43.683 03:33:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:43.683 true 00:36:43.683 03:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:43.683 03:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.942 03:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.201 03:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:44.201 03:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:44.201 true 00:36:44.201 03:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:44.201 03:33:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.579 Read completed with error (sct=0, sc=11) 00:36:45.579 03:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.579 03:33:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:45.579 03:33:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:45.837 true 00:36:45.837 03:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:45.837 03:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.773 03:33:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.773 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:46.773 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:47.032 true 00:36:47.032 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:47.032 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.333 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.333 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:47.333 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:47.613 true 00:36:47.613 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:47.613 03:33:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.872 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.872 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:47.872 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:48.131 true 00:36:48.131 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:48.131 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.391 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.391 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:48.391 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:48.650 true 00:36:48.650 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:48.650 03:33:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.909 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.909 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:48.909 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:49.168 true 00:36:49.168 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:49.168 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:49.427 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.686 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:49.686 03:33:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:49.686 true 00:36:49.686 03:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:49.686 03:33:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.066 03:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:51.066 03:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:51.066 03:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:51.325 true 00:36:51.325 03:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:51.325 03:33:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.261 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:36:52.261 03:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.261 03:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:52.261 03:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:52.521 true 00:36:52.521 03:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:52.521 03:33:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.780 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.780 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:52.780 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:53.040 true 00:36:53.040 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:53.040 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.301 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.561 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:53.561 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:53.561 true 00:36:53.561 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:53.561 03:33:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.820 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.079 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:54.079 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:54.079 true 00:36:54.079 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:54.079 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.338 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.597 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:54.597 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:54.597 true 00:36:54.597 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:54.597 03:33:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.856 03:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.116 03:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:55.116 03:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:55.116 true 00:36:55.116 03:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:55.116 03:33:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.497 03:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.497 03:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:56.497 03:33:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:56.756 true 00:36:56.756 03:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:56.756 03:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.699 03:33:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.699 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:57.699 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:57.958 true 00:36:57.958 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:57.958 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.216 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.475 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:58.475 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:58.475 true 00:36:58.475 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:36:58.475 03:33:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.853 03:33:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.853 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:59.853 03:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:59.853 03:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:00.112 true 00:37:00.112 03:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:00.112 03:33:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.048 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.048 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:01.048 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:01.306 true 00:37:01.306 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:01.306 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.564 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:01.564 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:01.564 03:33:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:01.822 true 00:37:01.822 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:01.822 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.080 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.080 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:02.080 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:02.338 true 00:37:02.338 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:02.338 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.597 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.597 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:02.597 03:33:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:02.855 true 00:37:02.855 03:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:02.855 03:33:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.233 03:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.233 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:04.233 03:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:04.233 03:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:04.522 true 00:37:04.522 03:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:04.522 03:33:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.459 03:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.459 03:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:05.459 03:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:05.718 true 00:37:05.718 03:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:05.718 03:33:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.718 03:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.977 03:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:05.977 03:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:06.236 true 00:37:06.236 03:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:06.236 03:33:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.615 03:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.615 03:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:07.615 03:33:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:07.615 true 00:37:07.874 03:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:07.874 03:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.811 03:33:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
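For orientation in the repeated records above and below: every null_size step is one pass of the hotplug loop traced at target/ns_hotplug_stress.sh lines 44-50, which keeps hot-removing and re-adding namespace 1 while the spdk_nvme_perf workload (PERF_PID=2084571, started with -Q 1000, hence the "Message suppressed 999 times" read errors whenever the namespace is briefly gone) is still running. A condensed sketch of that loop, reconstructed from the traced commands (the loop plumbing is inferred; only the rpc.py calls appear verbatim in the log):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  # one pass per iteration: detach ns 1, re-attach Delay0, grow NULL1 by one block
  while kill -0 "$PERF_PID" 2>/dev/null; do
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    "$rpc_py" bdev_null_resize NULL1 "$null_size"
  done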
00:37:08.811 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:37:08.811 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:37:09.069 true 00:37:09.069 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:09.069 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.069 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.329 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:37:09.329 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:37:09.588 true 00:37:09.588 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:09.588 03:33:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.523 03:33:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.524 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:10.782 03:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:37:10.782 03:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:37:11.040 true 00:37:11.040 03:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:11.040 03:33:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.975 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.975 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:37:11.975 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:37:12.233 true 00:37:12.233 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:12.233 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.491 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.491 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:37:12.491 03:33:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:37:12.749 true 00:37:12.749 03:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:12.749 03:33:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:14.124 Initializing NVMe Controllers
00:37:14.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:14.124 Controller IO queue size 128, less than required.
00:37:14.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:14.124 Controller IO queue size 128, less than required.
00:37:14.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:14.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:14.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:14.124 Initialization complete. Launching workers.
00:37:14.124 ========================================================
00:37:14.124 Latency(us)
00:37:14.124 Device Information : IOPS MiB/s Average min max
00:37:14.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1533.83 0.75 47293.19 2245.35 1011485.49
00:37:14.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14946.03 7.30 8544.65 2391.54 372432.75
00:37:14.124 ========================================================
00:37:14.124 Total : 16479.87 8.05 12151.10 2245.35 1011485.49
00:37:14.124
00:37:14.124 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.124 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:37:14.124 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:37:14.124 true 00:37:14.383 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2084571 00:37:14.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2084571) - No such process 00:37:14.383 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2084571 00:37:14.383 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.383 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:14.641 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:14.641 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:14.641 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:14.641 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:14.641 03:33:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:14.901 null0 00:37:14.901 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:14.901 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:14.901 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:14.901 null1 00:37:14.901 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:14.901 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:14.901 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:15.161 null2 00:37:15.161 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.161 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.161 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:15.420 null3 00:37:15.420 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.420 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.420 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:15.420 null4 00:37:15.420 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.420 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.420 03:33:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:15.679 null5 00:37:15.679 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.679 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.679 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:15.938 null6 00:37:15.938 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:15.938 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:15.938 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:16.196 null7 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.196 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
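[Reading aid] The eight bdev_null_create calls traced above (null0 through null7) provision one backing device per worker thread. A minimal sketch of the creation loop they imply; the for-loop framing is an assumption reconstructed from the (( i = 0 )) / (( i < nthreads )) / (( ++i )) guards at sh@58-60, while the rpc.py arguments (name, size in MiB, block size) are copied from the log, with the long workspace path shortened to rpc.py:

    nthreads=8
    pids=()
    # Create null0..null7: 100 MiB null bdevs with a 4096-byte block size.
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done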
00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
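[Reading aid] Each worker launched here runs add_remove, whose body can be pieced together from the ns_hotplug_stress.sh@14-18 trace lines. A hedged reconstruction: the ten-iteration bound comes from the (( i < 10 )) guards and the NQN and rpc.py arguments from the log, but the exact script text may differ:

    # Hot-plug stress worker: attach the given bdev as namespace <nsid>,
    # then detach it again, ten times in a row.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Eight such workers run in the background at once (add_remove 1 null0 through add_remove 8 null7, per the sh@63 lines), which is why their add/remove traces interleave below; sh@66 then waits on the recorded PIDs (2090165 2090166 ... in this run).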
00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2090165 2090166 2090168 2090171 2090172 2090174 2090176 2090178 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.197 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.456 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.457 03:33:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.457 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.715 03:33:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:16.973 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.973 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:16.974 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.233 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.492 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:17.750 03:33:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:17.750 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.750 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:17.750 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:17.750 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:17.750 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:17.750 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:17.751 03:33:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:17.751 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.009 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.268 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:18.526 03:33:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:18.784 03:34:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:18.784 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:18.785 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:18.785 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.044 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.303 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:19.562 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:19.821 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:19.821 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:19.821 03:34:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:19.821 
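The ns_hotplug_stress.sh@16-18 entries above are iterations of a plain add/remove loop racing namespace hotplug against the connected host. A minimal sketch of that pattern, assuming a running nvmf_tgt that already exposes subsystem nqn.2016-06.io.spdk:cnode1 with null bdevs null0 through null7; the shuffled ordering and backgrounding are inferred from the interleaved trace, not lifted from the script itself:

    #!/usr/bin/env bash
    # Replay the hotplug stress pass seen in the trace; rpc.py path is this workspace's.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; i++ )); do
        for n in $(seq 1 8 | shuf); do
            # -n pins the namespace ID, so bdev nullK always surfaces as NSID K+1.
            "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$(( n - 1 ))" &
        done
        wait
        for n in $(seq 1 8 | shuf); do
            "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n" &
        done
        wait
    done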
03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:19.821 rmmod nvme_tcp 00:37:19.821 rmmod nvme_fabrics 00:37:19.821 rmmod nvme_keyring 00:37:19.821 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2084302 ']' 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2084302 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 2084302 ']' 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 2084302 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2084302 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2084302' 00:37:20.080 killing process with pid 2084302 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 2084302 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 2084302 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:20.080 03:34:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.614 03:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:22.614 00:37:22.614 real 0m47.005s 00:37:22.614 user 3m12.110s 00:37:22.614 sys 0m15.743s 00:37:22.614 03:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:22.614 03:34:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:22.614 ************************************ 00:37:22.614 END TEST nvmf_ns_hotplug_stress 00:37:22.614 ************************************ 00:37:22.614 03:34:03 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:37:22.614 03:34:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 
00:37:22.614 03:34:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:22.614 03:34:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:22.614 ************************************ 00:37:22.614 START TEST nvmf_connect_stress 00:37:22.614 ************************************ 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:37:22.614 * Looking for test storage... 00:37:22.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:22.614 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:37:22.615 03:34:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:27.929 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:27.929 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:27.929 Found net devices under 0000:86:00.0: cvl_0_0 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:27.929 Found net devices under 0000:86:00.1: cvl_0_1 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:27.929 03:34:09 
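The gather_supported_nvmf_pci_devs walk above is what produces the two "Found ..." results: it filters the PCI bus for supported NIC device IDs and then lists the kernel interfaces bound under each function. A rough stand-alone equivalent, assuming lspci is available (0x8086:0x159b is the Intel E810 part seen in this log):

    # Print the net devices under each Intel E810 (8086:159b) PCI function,
    # mirroring the "Found net devices under ..." lines above.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done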
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:27.929 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:37:27.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:27.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms
00:37:27.930
00:37:27.930 --- 10.0.0.2 ping statistics ---
00:37:27.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:27.930 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:27.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:27.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:37:27.930
00:37:27.930 --- 10.0.0.1 ping statistics ---
00:37:27.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:27.930 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:27.930 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2094617
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2094617
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 2094617 ']'
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100
00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
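Collected in one place, the nvmf_tcp_init bring-up whose commands are spread across the preceding entries comes down to the following; the interface names cvl_0_0/cvl_0_1 are specific to this CI host:

    # Target port lives in its own namespace; initiator port stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1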
00:37:28.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:28.188 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:28.188 [2024-06-11 03:34:09.417296] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:37:28.188 [2024-06-11 03:34:09.417341] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:28.188 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.188 [2024-06-11 03:34:09.480802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:28.188 [2024-06-11 03:34:09.521281] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:28.188 [2024-06-11 03:34:09.521322] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:28.188 [2024-06-11 03:34:09.521329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:28.188 [2024-06-11 03:34:09.521335] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:28.188 [2024-06-11 03:34:09.521341] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:28.188 [2024-06-11 03:34:09.525029] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:37:28.188 [2024-06-11 03:34:09.525098] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:37:28.188 [2024-06-11 03:34:09.525101] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:28.447 [2024-06-11 03:34:09.662099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:28.447 03:34:09 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:28.447 [2024-06-11 03:34:09.694121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:28.447 NULL1 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2094841 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 EAL: No free 2048 kB hugepages reported on node 1 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.447 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.447 03:34:09 
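The connect_stress.sh@15-21 entries around here amount to a five-step target bring-up plus the stress-client launch. A sketch using plain rpc.py against the default /var/tmp/spdk.sock (the harness's rpc_cmd wrapper talks to the same socket):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    "$RPC" nvmf_create_transport -t tcp -o -u 8192          # flags exactly as traced
    "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # any host, 10 NS max
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512                  # 1000 MB null bdev, 512 B blocks
    # Ten-second stress client against the new listener; its PID is polled below.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!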
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:28.448 03:34:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:29.015 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.015 03:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:29.015 03:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:29.015 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.015 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:29.274 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.274 03:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:29.274 03:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:37:29.274 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.274 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:29.532 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.532 03:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:29.532 03:34:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:29.532 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.532 03:34:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:29.791 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:29.792 03:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:29.792 03:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:29.792 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:29.792 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:30.050 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:30.050 03:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:30.050 03:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:30.050 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:30.050 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:30.617 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:30.617 03:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:30.617 03:34:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:30.617 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:30.617 03:34:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:30.875 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:30.875 03:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:30.875 03:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:30.875 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:30.875 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:31.133 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:31.133 03:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:31.133 03:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:31.133 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:31.133 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:31.390 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:31.390 03:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:31.390 03:34:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:31.390 03:34:12 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:37:31.390 03:34:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:31.648 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:31.648 03:34:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:31.648 03:34:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:31.648 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:31.648 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:32.213 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:32.213 03:34:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:32.213 03:34:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:32.213 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:32.213 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:32.471 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:32.471 03:34:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:32.471 03:34:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:32.471 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:32.471 03:34:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:32.730 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:32.730 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:32.730 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:32.730 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:32.730 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:32.988 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:32.988 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:32.988 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:32.988 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:32.988 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:33.246 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:33.246 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:33.246 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:33.246 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:33.246 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:33.813 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:33.813 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:33.813 03:34:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:33.813 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 
00:37:33.814 03:34:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:34.072 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:34.072 03:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:34.072 03:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:34.072 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:34.072 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:34.331 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:34.331 03:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:34.331 03:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:34.331 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:34.331 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:34.589 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:34.589 03:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:34.589 03:34:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:34.589 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:34.589 03:34:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:35.157 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.157 03:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:35.157 03:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:35.157 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.157 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:35.415 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.415 03:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:35.415 03:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:35.415 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.415 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:35.674 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.674 03:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:35.674 03:34:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:35.674 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.674 03:34:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:35.933 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.933 03:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:35.933 03:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:35.933 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.933 03:34:17 nvmf_tcp.nvmf_connect_stress -- 
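The long run of connect_stress.sh@34-35 pairs above and below is a liveness poll: as long as the stress client still answers kill -0, the harness keeps replaying the twenty RPC lines queued into rpc.txt by the @27-28 cat loop (their exact contents are not visible in this log). The shape of it, sketched:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    # kill -0 sends no signal; it only reports whether the PID still exists.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        while read -r cmd; do
            "$RPC" $cmd        # word-splitting intended: each line is one RPC invocation
        done < "$rpcs"
    done
    wait "$PERF_PID"           # reap the client; its exit status decides the test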
common/autotest_common.sh@10 -- # set +x 00:37:36.191 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.191 03:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:36.191 03:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:36.191 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.191 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:36.758 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.758 03:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:36.758 03:34:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:36.758 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.758 03:34:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:37.017 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:37.017 03:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:37.017 03:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:37.017 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:37.017 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:37.275 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:37.275 03:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:37.275 03:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:37.275 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:37.275 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:37.534 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:37.534 03:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:37.534 03:34:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:37.534 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:37.534 03:34:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:38.102 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:38.102 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:38.102 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:38.102 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:38.102 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:38.361 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:38.361 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:38.361 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:37:38.361 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:38.361 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:38.620 Testing 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2094841 00:37:38.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2094841) - No such process 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2094841 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:38.620 rmmod nvme_tcp 00:37:38.620 rmmod nvme_fabrics 00:37:38.620 rmmod nvme_keyring 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2094617 ']' 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2094617 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 2094617 ']' 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 2094617 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2094617 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2094617' 00:37:38.620 killing process with pid 2094617 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 2094617 00:37:38.620 03:34:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 2094617 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:38.880 03:34:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.415 03:34:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:41.415 00:37:41.415 real 0m18.628s 00:37:41.415 user 0m39.020s 00:37:41.415 sys 0m8.218s 00:37:41.415 03:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:41.415 03:34:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:37:41.415 ************************************ 00:37:41.415 END TEST nvmf_connect_stress 00:37:41.415 ************************************ 00:37:41.415 03:34:22 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:37:41.415 03:34:22 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:41.415 03:34:22 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:41.415 03:34:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:41.415 ************************************ 00:37:41.415 START TEST nvmf_fused_ordering 00:37:41.415 ************************************ 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:37:41.415 * Looking for test storage... 
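For reference, the nvmftestfini/nvmf_tcp_fini teardown that closed nvmf_connect_stress just above (and nvmf_ns_hotplug_stress before it) reduces to the steps below. The namespace deletion inside _remove_spdk_ns is an inference, since that call's output is redirected to /dev/null in the trace:

    sync
    # common.sh retries the unload up to 20 times; a single attempt shown.
    modprobe -v -r nvme-tcp          # trace shows nvme_tcp/nvme_fabrics/nvme_keyring dropping out
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop the nvmf_tgt reactors
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1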
00:37:41.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:37:41.415 03:34:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:48.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:48.012 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:48.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:48.013 Found net devices under 0000:86:00.0: cvl_0_0 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:48.013 03:34:28 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:48.013 Found net devices under 0000:86:00.1: cvl_0_1 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:48.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:48.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:37:48.013 00:37:48.013 --- 10.0.0.2 ping statistics --- 00:37:48.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.013 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:48.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:48.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:37:48.013 00:37:48.013 --- 10.0.0.1 ping statistics --- 00:37:48.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:48.013 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2100281 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2100281 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 2100281 ']' 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:48.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.013 [2024-06-11 03:34:28.576671] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
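The bring-up traced above isolates one e810 port (cvl_0_0) in a network namespace for the target while the peer port (cvl_0_1) stays in the root namespace as the initiator, then proves L3 reachability in both directions before starting nvmf_tgt. A condensed sketch of those steps, using the device, namespace, and address values from the trace (illustrative ordering, not the literal nvmf/common.sh body):

  ip netns add cvl_0_0_ns_spdk                      # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP replies
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

Only after both pings succeed is the target launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.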
00:37:48.013 [2024-06-11 03:34:28.576715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:48.013 EAL: No free 2048 kB hugepages reported on node 1 00:37:48.013 [2024-06-11 03:34:28.636022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.013 [2024-06-11 03:34:28.675561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:48.013 [2024-06-11 03:34:28.675597] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:48.013 [2024-06-11 03:34:28.675604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:48.013 [2024-06-11 03:34:28.675611] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:48.013 [2024-06-11 03:34:28.675616] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:48.013 [2024-06-11 03:34:28.675651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.013 [2024-06-11 03:34:28.799226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.013 [2024-06-11 03:34:28.815372] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:48.013 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.013 NULL1 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:48.014 03:34:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:48.014 [2024-06-11 03:34:28.866310] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:37:48.014 [2024-06-11 03:34:28.866345] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100306 ] 00:37:48.014 EAL: No free 2048 kB hugepages reported on node 1 00:37:48.014 Attached to nqn.2016-06.io.spdk:cnode1 00:37:48.014 Namespace ID: 1 size: 1GB 00:37:48.014 fused_ordering(0) 00:37:48.014 fused_ordering(1) 00:37:48.014 fused_ordering(2) 00:37:48.014 fused_ordering(3) 00:37:48.014 fused_ordering(4) 00:37:48.014 fused_ordering(5) 00:37:48.014 fused_ordering(6) 00:37:48.014 fused_ordering(7) 00:37:48.014 fused_ordering(8) 00:37:48.014 fused_ordering(9) 00:37:48.014 fused_ordering(10) 00:37:48.014 fused_ordering(11) 00:37:48.014 fused_ordering(12) 00:37:48.014 fused_ordering(13) 00:37:48.014 fused_ordering(14) 00:37:48.014 fused_ordering(15) 00:37:48.014 fused_ordering(16) 00:37:48.014 fused_ordering(17) 00:37:48.014 fused_ordering(18) 00:37:48.014 fused_ordering(19) 00:37:48.014 fused_ordering(20) 00:37:48.014 fused_ordering(21) 00:37:48.014 fused_ordering(22) 00:37:48.014 fused_ordering(23) 00:37:48.014 fused_ordering(24) 00:37:48.014 fused_ordering(25) 00:37:48.014 fused_ordering(26) 00:37:48.014 fused_ordering(27) 00:37:48.014 fused_ordering(28) 00:37:48.014 fused_ordering(29) 00:37:48.014 fused_ordering(30) 00:37:48.014 fused_ordering(31) 00:37:48.014 fused_ordering(32) 00:37:48.014 fused_ordering(33) 00:37:48.014 fused_ordering(34) 00:37:48.014 fused_ordering(35) 00:37:48.014 fused_ordering(36) 00:37:48.014 fused_ordering(37) 00:37:48.014 fused_ordering(38) 00:37:48.014 fused_ordering(39) 00:37:48.014 fused_ordering(40) 00:37:48.014 fused_ordering(41) 00:37:48.014 fused_ordering(42) 00:37:48.014 fused_ordering(43) 00:37:48.014 fused_ordering(44) 00:37:48.014 fused_ordering(45) 
[fused_ordering(46) through fused_ordering(1013) reported strictly in sequence; repetitive output condensed. Timestamps advanced from 00:37:48.014 to 00:37:49.671, with visible pauses after entries 205, 410, 615, and 820.]
fused_ordering(1014) 00:37:49.671 fused_ordering(1015) 00:37:49.671 fused_ordering(1016) 00:37:49.671 fused_ordering(1017) 00:37:49.671 fused_ordering(1018) 00:37:49.671 fused_ordering(1019) 00:37:49.671 fused_ordering(1020) 00:37:49.671 fused_ordering(1021) 00:37:49.671 fused_ordering(1022) 00:37:49.671 fused_ordering(1023) 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:49.671 rmmod nvme_tcp 00:37:49.671 rmmod nvme_fabrics 00:37:49.671 rmmod nvme_keyring 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2100281 ']' 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2100281 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 2100281 ']' 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 2100281 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2100281 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2100281' 00:37:49.671 killing process with pid 2100281 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 2100281 00:37:49.671 03:34:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 2100281 00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
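The nvmftestfini/nvmfcleanup sequence above follows a fixed teardown shape: retry the kernel-module unload (rmmod can fail while NVMe-oF connections are still draining), then kill the target pid only after confirming it still names the expected reactor process. A minimal standalone sketch of that pattern in bash; the function names here are hypothetical stand-ins for the real helpers in nvmf/common.sh and autotest_common.sh:

unload_nvme_modules() {
    # Retry: module removal can fail transiently while qpairs drain.
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
}

killprocess_guarded() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 in the log above
    [ "$name" = sudo ] && return 1               # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap; valid because this shell spawned it
}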
00:37:49.671 03:34:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.204 03:34:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:52.204 00:37:52.204 real 0m10.838s 00:37:52.204 user 0m5.038s 00:37:52.204 sys 0m6.009s 00:37:52.204 03:34:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:37:52.204 03:34:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:37:52.204 ************************************ 00:37:52.204 END TEST nvmf_fused_ordering 00:37:52.204 ************************************ 00:37:52.204 03:34:33 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:37:52.204 03:34:33 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:37:52.204 03:34:33 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:37:52.204 03:34:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:52.204 ************************************ 00:37:52.204 START TEST nvmf_delete_subsystem 00:37:52.204 ************************************ 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:37:52.204 * Looking for test storage... 00:37:52.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:37:52.204 03:34:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:58.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:58.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:58.767 03:34:39 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:58.767 Found net devices under 0000:86:00.0: cvl_0_0 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:58.767 Found net devices under 0000:86:00.1: cvl_0_1 00:37:58.767 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:58.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:37:58.768 00:37:58.768 --- 10.0.0.2 ping statistics --- 00:37:58.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.768 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:58.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:37:58.768 00:37:58.768 --- 10.0.0.1 ping statistics --- 00:37:58.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.768 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2104559 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2104559 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 2104559 ']' 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local max_retries=100 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:37:58.768 03:34:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.768 [2024-06-11 03:34:39.913209] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:37:58.768 [2024-06-11 03:34:39.913250] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.768 EAL: No free 2048 kB hugepages reported on node 1 00:37:58.768 [2024-06-11 03:34:39.974397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:58.768 [2024-06-11 03:34:40.014174] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:58.768 [2024-06-11 03:34:40.014213] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.768 [2024-06-11 03:34:40.014221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.768 [2024-06-11 03:34:40.014227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.768 [2024-06-11 03:34:40.014232] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.768 [2024-06-11 03:34:40.014278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.768 [2024-06-11 03:34:40.014280] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.768 [2024-06-11 03:34:40.139703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.768 [2024-06-11 03:34:40.155858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.768 NULL1 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:58.768 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:58.769 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:59.026 Delay0 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2104585 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:59.026 03:34:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:59.026 EAL: No free 2048 kB hugepages reported on node 1 00:37:59.026 [2024-06-11 03:34:40.230418] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
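Stripping the xtrace echoes, the target that the perf load is now hammering was assembled entirely over JSON-RPC. Run by hand, the same sequence would look roughly like this; the rpc.py path and default socket are assumptions, while the RPC names and arguments are exactly the ones echoed above:

RPC="scripts/rpc.py"                 # assumed location inside the SPDK tree
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512     # 1000 MiB backing bdev, 512-byte blocks
# Wrap NULL1 in a delay bdev: the four values are (as I read the flags) average
# and p99 read/write latencies in microseconds, i.e. a 1-second floor, so queued
# I/O is guaranteed to still be in flight when the delete fires.
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &    # the racing workload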
00:38:00.925 03:34:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:00.925 03:34:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:00.925 03:34:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:38:00.925 [... several hundred "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completion entries omitted: the queued perf I/O fails over as the subsystem is deleted underneath it; the distinct transport errors interleaved with that storm were: ...]
[2024-06-11 03:34:42.267556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354670 is same with the state(5) to be set
[2024-06-11 03:34:42.269509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0920000c00 is same with the state(5) to be set
[2024-06-11 03:34:42.269868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f092000cfe0 is same with the state(5) to be set
[2024-06-11 03:34:42.270110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f092000d600 is same with the state(5) to be set
[2024-06-11 03:34:43.243865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2357900 is same with the state(5) to be set
[2024-06-11 03:34:43.269510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f092000d2f0 is same with the state(5) to be set
[2024-06-11 03:34:43.271698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354180 is same with the state(5) to be set
[2024-06-11 03:34:43.271932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354360 is same with the state(5) to be set
[2024-06-11 03:34:43.272089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2354980 is same with the state(5) to be set
00:38:02.117 Initializing NVMe Controllers 00:38:02.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:02.117 Controller IO queue size 128, less than required. 00:38:02.117 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:02.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:02.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:02.117 Initialization complete. Launching workers.
00:38:02.117 ======================================================== 00:38:02.117 Latency(us) 00:38:02.117 Device Information : IOPS MiB/s Average min max 00:38:02.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 180.02 0.09 957336.14 1305.45 1009684.17 00:38:02.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 145.70 0.07 916760.52 362.24 1010612.14 00:38:02.117 ======================================================== 00:38:02.117 Total : 325.72 0.16 939185.52 362.24 1010612.14 00:38:02.117 00:38:02.118 [2024-06-11 03:34:43.272626] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2357900 (9): Bad file descriptor 00:38:02.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:38:02.118 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.118 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:38:02.118 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2104585 00:38:02.118 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:38:02.376 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:02.376 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2104585 00:38:02.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2104585) - No such process 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2104585 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2104585 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2104585 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
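The delay/kill -0 loop in the trace above is a generic wait-for-exit idiom: signal 0 delivers nothing and only tests whether the pid still exists, and once the pid is gone, `wait` on it is expected to fail, which is exactly what the NOT wrapper asserts. As a standalone sketch:

wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do     # existence probe, no signal delivered
        (( delay++ > 30 )) && return 1       # same budget as delete_subsystem.sh (~15 s)
        sleep 0.5
    done
    return 0                                 # pid gone; `wait $pid` now errors out
}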
00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.634 [2024-06-11 03:34:43.800763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2105274 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:02.634 03:34:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:02.634 EAL: No free 2048 kB hugepages reported on node 1 00:38:02.634 [2024-06-11 03:34:43.861634] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
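The WARNING above (repeated from the first run) fires because the host connects through the discovery service on a listener that was only attached to cnode1, never to the discovery subsystem itself. For reference, the kernel-initiator equivalent of the handshake spdk_nvme_perf performs would be the following nvme-cli calls; illustrative only, since the test drives I/O entirely from user space:

nvme discover -t tcp -a 10.0.0.2 -s 4420                               # discovery subsystem
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1  # I/O subsystem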
00:38:03.201 03:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:03.201 03:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:03.201 03:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:03.458 03:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:03.458 03:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:03.458 03:34:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:04.025 03:34:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:04.025 03:34:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:04.025 03:34:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:04.591 03:34:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:04.591 03:34:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:04.591 03:34:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:05.156 03:34:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:05.156 03:34:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:05.156 03:34:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:05.722 03:34:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:05.722 03:34:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:05.722 03:34:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:05.722 Initializing NVMe Controllers 00:38:05.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:05.722 Controller IO queue size 128, less than required. 00:38:05.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:05.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:05.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:05.722 Initialization complete. Launching workers. 
00:38:05.722 ========================================================
00:38:05.722                                                                           Latency(us)
00:38:05.722 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:38:05.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003655.84 1000153.31 1044714.19
00:38:05.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1006025.72 1000248.07 1042087.88
00:38:05.722 ========================================================
00:38:05.722 Total                                                                   :     256.00       0.12 1004840.78 1000153.31 1044714.19
00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2105274 00:38:05.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2105274) - No such process 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2105274 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:05.980 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:05.980 rmmod nvme_tcp 00:38:05.980 rmmod nvme_fabrics 00:38:06.239 rmmod nvme_keyring 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2104559 ']' 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2104559 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 2104559 ']' 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 2104559 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2104559 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2104559' 00:38:06.239 killing process with pid 2104559 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # kill 2104559 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait
2104559 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:06.239 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:06.497 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:06.497 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.497 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:06.497 03:34:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:08.421 03:34:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:08.421 00:38:08.421 real 0m16.501s 00:38:08.421 user 0m29.090s 00:38:08.421 sys 0m5.621s 00:38:08.421 03:34:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:08.421 03:34:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:08.421 ************************************ 00:38:08.421 END TEST nvmf_delete_subsystem 00:38:08.421 ************************************ 00:38:08.421 03:34:49 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:38:08.421 03:34:49 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:08.421 03:34:49 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:08.421 03:34:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:08.421 ************************************ 00:38:08.421 START TEST nvmf_ns_masking 00:38:08.421 ************************************ 00:38:08.421 03:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:38:08.680 * Looking for test storage... 
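The nvmf_ns_masking test starting here exercises per-host namespace visibility: a namespace is created with auto-visibility disabled, then attached to and detached from one host NQN via RPC. The three calls that drive it, exactly as they appear later in this trace (rpc.py path shortened):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1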
00:38:08.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=86b30b48-ce18-4362-86e0-745bd2f1306f 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:08.680 03:34:49 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:08.680 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:38:08.681 03:34:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:15.268 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:15.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:15.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:15.269 Found net devices under 0000:86:00.0: cvl_0_0 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
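The two "Found 0000:86:00.x (0x8086 - 0x159b)" lines above are common.sh matching this node's E810 (ice) ports against its PCI ID allow-list; the "Found net devices under ..." lines come from resolving each PCI function to its kernel interface through sysfs, essentially:

    pci=0000:86:00.0
    # a network-class PCI function exposes its netdev name(s) under sysfs
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"   # prints cvl_0_0 here
    done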
00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:15.269 Found net devices under 0000:86:00.1: cvl_0_1 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:15.269 03:34:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:15.269 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:15.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:15.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:38:15.270 00:38:15.270 --- 10.0.0.2 ping statistics --- 00:38:15.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.270 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:15.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:15.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:38:15.270 00:38:15.270 --- 10.0.0.1 ping statistics --- 00:38:15.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:15.270 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2109775 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2109775 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 2109775 ']' 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:15.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:15.270 03:34:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:38:15.270 [2024-06-11 03:34:56.330952] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
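With nvmf_tgt now starting, the fabric common.sh just assembled looks like this, condensed from the commands traced above (interface, namespace, and address names are taken from this log; binary paths abbreviated):

    ip netns add cvl_0_0_ns_spdk                  # private netns for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target-side E810 port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF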
00:38:15.270 [2024-06-11 03:34:56.330993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:15.270 EAL: No free 2048 kB hugepages reported on node 1 00:38:15.270 [2024-06-11 03:34:56.393405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:15.270 [2024-06-11 03:34:56.434561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:15.270 [2024-06-11 03:34:56.434602] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:15.270 [2024-06-11 03:34:56.434608] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:15.270 [2024-06-11 03:34:56.434614] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:15.270 [2024-06-11 03:34:56.434618] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:15.270 [2024-06-11 03:34:56.434711] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:15.270 [2024-06-11 03:34:56.434830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:15.270 [2024-06-11 03:34:56.434897] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:15.270 [2024-06-11 03:34:56.434898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.836 03:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:15.836 03:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:38:15.836 03:34:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:15.836 03:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:15.836 03:34:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:38:15.836 03:34:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:15.836 03:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:16.094 [2024-06-11 03:34:57.328701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:16.094 03:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:38:16.094 03:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:38:16.094 03:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:38:16.352 Malloc1 00:38:16.353 03:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:38:16.353 Malloc2 00:38:16.353 03:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:16.611 03:34:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:38:16.869 03:34:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:16.869 [2024-06-11 03:34:58.232801] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:16.869 03:34:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:38:16.869 03:34:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 86b30b48-ce18-4362-86e0-745bd2f1306f -a 10.0.0.2 -s 4420 -i 4 00:38:17.128 03:34:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:38:17.128 03:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:38:17.128 03:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:38:17.128 03:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:38:17.128 03:34:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:38:19.030 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:38:19.288 [ 0]:0x1 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:19.288 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a7cc9d371d684b3cb69df08046822f76 00:38:19.289 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a7cc9d371d684b3cb69df08046822f76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:19.289 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:38:19.289 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:38:19.289 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:19.289 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
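The connect above carries both a host NQN (-q) and a host ID (-I, per nvme-cli), which is the identity the target's masking rules key on, while -i 4 requests four I/O queues. The visibility probe that follows is ns_masking.sh's ns_is_visible helper, roughly (a reconstruction from the traced lines 39-41, not the verbatim script):

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"       # line 39: is the NSID listed?
        # lines 40-41: a masked namespace identifies with zeroed data,
        # so its NGUID reads back as all zeros
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }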
00:38:19.289 [ 0]:0x1 00:38:19.289 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:19.289 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a7cc9d371d684b3cb69df08046822f76 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a7cc9d371d684b3cb69df08046822f76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:19.547 [ 1]:0x2 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b89e2fa0688e45e4a8c0606a7eb0ef36 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b89e2fa0688e45e4a8c0606a7eb0ef36 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:19.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:19.547 03:35:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.806 03:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:38:19.806 03:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:38:19.806 03:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 86b30b48-ce18-4362-86e0-745bd2f1306f -a 10.0.0.2 -s 4420 -i 4 00:38:20.064 03:35:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:38:20.064 03:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:38:20.064 03:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:38:20.064 03:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:38:20.064 03:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:38:20.064 03:35:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == 
nvme_device_counter )) 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:38:21.965 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:38:22.224 [ 0]:0x2 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b89e2fa0688e45e4a8c0606a7eb0ef36 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b89e2fa0688e45e4a8c0606a7eb0ef36 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:22.224 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:22.483 [ 0]:0x1 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a7cc9d371d684b3cb69df08046822f76 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a7cc9d371d684b3cb69df08046822f76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:38:22.483 [ 1]:0x2 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b89e2fa0688e45e4a8c0606a7eb0ef36 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b89e2fa0688e45e4a8c0606a7eb0ef36 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:22.483 03:35:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:38:22.742 
03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:22.742 [ 0]:0x2 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b89e2fa0688e45e4a8c0606a7eb0ef36 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b89e2fa0688e45e4a8c0606a7eb0ef36 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:38:22.742 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:23.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:23.001 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 86b30b48-ce18-4362-86e0-745bd2f1306f -a 10.0.0.2 -s 4420 -i 4 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:38:23.260 03:35:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:38:25.793 [ 0]:0x1 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a7cc9d371d684b3cb69df08046822f76 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a7cc9d371d684b3cb69df08046822f76 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:38:25.793 [ 1]:0x2 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b89e2fa0688e45e4a8c0606a7eb0ef36 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b89e2fa0688e45e4a8c0606a7eb0ef36 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:25.793 03:35:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:25.793 [ 0]:0x2 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:38:25.793 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b89e2fa0688e45e4a8c0606a7eb0ef36 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b89e2fa0688e45e4a8c0606a7eb0ef36 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:38:26.052 [2024-06-11 03:35:07.362496] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:38:26.052 request: 00:38:26.052 { 00:38:26.052 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:26.052 "nsid": 2, 00:38:26.052 "host": "nqn.2016-06.io.spdk:host1", 00:38:26.052 "method": 
"nvmf_ns_remove_host", 00:38:26.052 "req_id": 1 00:38:26.052 } 00:38:26.052 Got JSON-RPC error response 00:38:26.052 response: 00:38:26.052 { 00:38:26.052 "code": -32602, 00:38:26.052 "message": "Invalid parameters" 00:38:26.052 } 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:38:26.052 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:38:26.052 [ 0]:0x2 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=b89e2fa0688e45e4a8c0606a7eb0ef36 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ b89e2fa0688e45e4a8c0606a7eb0ef36 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:26.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:26.311 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:26.569 rmmod nvme_tcp 00:38:26.569 rmmod nvme_fabrics 00:38:26.569 rmmod nvme_keyring 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2109775 ']' 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2109775 00:38:26.569 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 2109775 ']' 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 2109775 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2109775 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2109775' 00:38:26.570 killing process with pid 2109775 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 2109775 00:38:26.570 03:35:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 2109775 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:26.828 03:35:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.732 
03:35:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:28.732 00:38:28.732 real 0m20.322s 00:38:28.732 user 0m50.652s 00:38:28.732 sys 0m6.354s 00:38:28.732 03:35:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:28.732 03:35:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:38:28.732 ************************************ 00:38:28.732 END TEST nvmf_ns_masking 00:38:28.732 ************************************ 00:38:28.991 03:35:10 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:38:28.991 03:35:10 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:38:28.991 03:35:10 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:28.991 03:35:10 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:28.991 03:35:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:28.991 ************************************ 00:38:28.991 START TEST nvmf_nvme_cli 00:38:28.991 ************************************ 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:38:28.991 * Looking for test storage... 00:38:28.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:38:28.991 03:35:10 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:38:28.992 03:35:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:38:35.563 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:35.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:35.564 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:35.564 03:35:16 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:35.564 Found net devices under 0000:86:00.0: cvl_0_0 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:35.564 Found net devices under 0000:86:00.1: cvl_0_1 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:35.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:35.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:38:35.564 00:38:35.564 --- 10.0.0.2 ping statistics --- 00:38:35.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.564 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:35.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:35.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:38:35.564 00:38:35.564 --- 10.0.0.1 ping statistics --- 00:38:35.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:35.564 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2115805 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2115805 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 2115805 ']' 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:35.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
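The nvmfappstart/waitforlisten sequence above follows a common start-and-poll pattern: launch the target in the background, then block until its JSON-RPC socket answers. A minimal sketch of that pattern, assuming the repo layout used in this run (the real retry logic lives in autotest_common.sh; rpc_get_methods is used here only as a cheap liveness probe, and the 100 x 0.1s loop is illustrative, not the script's actual timeout):

    # start the target inside the test network namespace and remember its pid
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the JSON-RPC UNIX socket until the app responds, then proceed
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done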
00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:35.564 03:35:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.564 [2024-06-11 03:35:16.853551] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:38:35.564 [2024-06-11 03:35:16.853593] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:35.564 EAL: No free 2048 kB hugepages reported on node 1 00:38:35.564 [2024-06-11 03:35:16.915643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:35.564 [2024-06-11 03:35:16.957682] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:35.564 [2024-06-11 03:35:16.957719] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:35.564 [2024-06-11 03:35:16.957726] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:35.564 [2024-06-11 03:35:16.957732] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:35.564 [2024-06-11 03:35:16.957737] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:35.564 [2024-06-11 03:35:16.957775] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.564 [2024-06-11 03:35:16.957877] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:35.564 [2024-06-11 03:35:16.957970] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:35.564 [2024-06-11 03:35:16.957971] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 [2024-06-11 03:35:17.102049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 Malloc0 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 Malloc1 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 [2024-06-11 03:35:17.179382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:35.850 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:38:36.108 00:38:36.108 Discovery Log Number of Records 2, Generation counter 2 00:38:36.108 =====Discovery Log Entry 0====== 00:38:36.108 trtype: tcp 00:38:36.108 adrfam: ipv4 00:38:36.108 subtype: current discovery subsystem 00:38:36.108 treq: not required 00:38:36.108 portid: 0 00:38:36.108 trsvcid: 4420 00:38:36.108 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:36.108 traddr: 10.0.0.2 00:38:36.108 eflags: explicit discovery connections, duplicate discovery information 00:38:36.108 sectype: none 00:38:36.108 =====Discovery Log Entry 1====== 00:38:36.108 trtype: tcp 00:38:36.108 adrfam: ipv4 00:38:36.108 subtype: nvme subsystem 00:38:36.108 treq: not required 00:38:36.108 portid: 0 00:38:36.108 trsvcid: 4420 
00:38:36.108 subnqn: nqn.2016-06.io.spdk:cnode1 00:38:36.108 traddr: 10.0.0.2 00:38:36.108 eflags: none 00:38:36.108 sectype: none 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:36.108 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:38:36.109 03:35:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:37.483 03:35:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:38:37.483 03:35:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0 00:38:37.483 03:35:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:38:37.483 03:35:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:38:37.483 03:35:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:38:37.483 03:35:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # sleep 2 00:38:39.413 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:38:39.413 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:38:39.413 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:38:39.414 03:35:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:38:39.414 /dev/nvme0n1 ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:39.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:39.414 03:35:20 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:39.414 rmmod nvme_tcp 00:38:39.414 rmmod nvme_fabrics 00:38:39.414 rmmod nvme_keyring 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2115805 ']' 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2115805 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 2115805 ']' 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 2115805 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2115805 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2115805' 00:38:39.414 killing process with pid 2115805 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 2115805 00:38:39.414 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 2115805 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:39.674 03:35:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:42.208 03:35:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:42.208 00:38:42.208 real 0m12.888s 00:38:42.208 user 0m17.796s 00:38:42.208 sys 0m5.402s 00:38:42.208 03:35:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable 00:38:42.208 03:35:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:38:42.208 ************************************ 00:38:42.208 END TEST nvmf_nvme_cli 00:38:42.208 ************************************ 00:38:42.208 03:35:23 nvmf_tcp 
-- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:38:42.208 03:35:23 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:38:42.208 03:35:23 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:38:42.208 03:35:23 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:38:42.208 03:35:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:42.208 ************************************ 00:38:42.208 START TEST nvmf_vfio_user 00:38:42.208 ************************************ 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:38:42.208 * Looking for test storage... 00:38:42.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:42.208 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:38:42.209 
03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2116877 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2116877' 00:38:42.209 Process pid: 2116877 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2116877 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2116877 ']' 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:42.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:38:42.209 [2024-06-11 03:35:23.294943] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:38:42.209 [2024-06-11 03:35:23.294988] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:42.209 EAL: No free 2048 kB hugepages reported on node 1 00:38:42.209 [2024-06-11 03:35:23.355795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:42.209 [2024-06-11 03:35:23.398006] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:42.209 [2024-06-11 03:35:23.398042] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:42.209 [2024-06-11 03:35:23.398049] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:42.209 [2024-06-11 03:35:23.398055] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:42.209 [2024-06-11 03:35:23.398060] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
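Once this target is up, the harness creates the VFIOUSER transport and then builds NUM_DEVICES=2 vfio-user controllers. The RPC sequence logged below condenses to a loop like the following (a sketch reusing the exact paths, bdev sizes, and subsystem names from this run; the real loop is in nvmf_vfio_user.sh):

    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done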
00:38:42.209 [2024-06-11 03:35:23.398095] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.209 [2024-06-11 03:35:23.398192] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:38:42.209 [2024-06-11 03:35:23.398277] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:38:42.209 [2024-06-11 03:35:23.398278] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:38:42.209 03:35:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:38:43.145 03:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:38:43.402 03:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:38:43.403 03:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:38:43.403 03:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:38:43.403 03:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:38:43.403 03:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:38:43.661 Malloc1 00:38:43.661 03:35:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:38:43.920 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:38:43.920 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:38:44.179 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:38:44.179 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:38:44.179 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:38:44.437 Malloc2 00:38:44.437 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:38:44.437 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:38:44.696 03:35:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:38:44.956 03:35:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:38:44.956 03:35:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:38:44.956 03:35:26 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:38:44.956 03:35:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:38:44.956 03:35:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:38:44.956 03:35:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:38:44.956 [2024-06-11 03:35:26.184611] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:38:44.956 [2024-06-11 03:35:26.184644] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117354 ] 00:38:44.956 EAL: No free 2048 kB hugepages reported on node 1 00:38:44.956 [2024-06-11 03:35:26.214316] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:38:44.956 [2024-06-11 03:35:26.219273] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:38:44.956 [2024-06-11 03:35:26.219292] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6ee5f81000 00:38:44.956 [2024-06-11 03:35:26.220268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.221271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.222278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.223289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.224291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.225298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.226299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.227307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:38:44.956 [2024-06-11 03:35:26.228315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:38:44.956 [2024-06-11 03:35:26.228327] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6ee4d48000 00:38:44.956 [2024-06-11 03:35:26.229389] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:38:44.956 [2024-06-11 03:35:26.239707] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: 
Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:38:44.956 [2024-06-11 03:35:26.239732] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:38:44.956 [2024-06-11 03:35:26.248433] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:38:44.956 [2024-06-11 03:35:26.248472] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:38:44.956 [2024-06-11 03:35:26.248541] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:38:44.956 [2024-06-11 03:35:26.248557] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:38:44.956 [2024-06-11 03:35:26.248562] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:38:44.956 [2024-06-11 03:35:26.249437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:38:44.956 [2024-06-11 03:35:26.249447] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:38:44.956 [2024-06-11 03:35:26.249454] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:38:44.956 [2024-06-11 03:35:26.250445] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:38:44.957 [2024-06-11 03:35:26.250455] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:38:44.957 [2024-06-11 03:35:26.250462] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:38:44.957 [2024-06-11 03:35:26.251447] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:38:44.957 [2024-06-11 03:35:26.251454] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:38:44.957 [2024-06-11 03:35:26.252451] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:38:44.957 [2024-06-11 03:35:26.252460] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:38:44.957 [2024-06-11 03:35:26.252465] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:38:44.957 [2024-06-11 03:35:26.252470] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:38:44.957 [2024-06-11 03:35:26.252575] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:38:44.957 [2024-06-11 03:35:26.252579] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:38:44.957 [2024-06-11 03:35:26.252584] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:38:44.957 [2024-06-11 03:35:26.253458] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:38:44.957 [2024-06-11 03:35:26.254459] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:38:44.957 [2024-06-11 03:35:26.255466] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:38:44.957 [2024-06-11 03:35:26.256466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:38:44.957 [2024-06-11 03:35:26.256529] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:38:44.957 [2024-06-11 03:35:26.257478] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:38:44.957 [2024-06-11 03:35:26.257485] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:38:44.957 [2024-06-11 03:35:26.257490] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257508] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:38:44.957 [2024-06-11 03:35:26.257515] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257531] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:38:44.957 [2024-06-11 03:35:26.257536] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:38:44.957 [2024-06-11 03:35:26.257547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:38:44.957 [2024-06-11 03:35:26.257584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:38:44.957 [2024-06-11 03:35:26.257593] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:38:44.957 [2024-06-11 03:35:26.257598] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:38:44.957 [2024-06-11 03:35:26.257603] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:38:44.957 [2024-06-11 03:35:26.257607] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:38:44.957 [2024-06-11 03:35:26.257611] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 
1 00:38:44.957 [2024-06-11 03:35:26.257615] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:38:44.957 [2024-06-11 03:35:26.257619] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:38:44.957 [2024-06-11 03:35:26.257647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:38:44.957 [2024-06-11 03:35:26.257657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.957 [2024-06-11 03:35:26.257664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.957 [2024-06-11 03:35:26.257671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.957 [2024-06-11 03:35:26.257680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:38:44.957 [2024-06-11 03:35:26.257684] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257692] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:38:44.957 [2024-06-11 03:35:26.257710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:38:44.957 [2024-06-11 03:35:26.257715] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:38:44.957 [2024-06-11 03:35:26.257719] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257725] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257732] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:38:44.957 [2024-06-11 03:35:26.257751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:38:44.957 [2024-06-11 03:35:26.257792] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257798] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257805] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:38:44.957 [2024-06-11 03:35:26.257809] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:38:44.957 [2024-06-11 03:35:26.257814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:38:44.957 [2024-06-11 03:35:26.257829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:38:44.957 [2024-06-11 03:35:26.257836] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:38:44.957 [2024-06-11 03:35:26.257846] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257853] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257858] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:38:44.957 [2024-06-11 03:35:26.257862] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:38:44.957 [2024-06-11 03:35:26.257868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:38:44.957 [2024-06-11 03:35:26.257881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:38:44.957 [2024-06-11 03:35:26.257893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257901] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257906] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:38:44.957 [2024-06-11 03:35:26.257910] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:38:44.957 [2024-06-11 03:35:26.257916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:38:44.957 [2024-06-11 03:35:26.257929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:38:44.957 [2024-06-11 03:35:26.257936] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257948] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:38:44.957 [2024-06-11 03:35:26.257952] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:38:44.958 [2024-06-11 03:35:26.257957] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:38:44.958 [2024-06-11 03:35:26.257961] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:38:44.958 [2024-06-11 03:35:26.257965] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:38:44.958 [2024-06-11 03:35:26.257969] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:38:44.958 [2024-06-11 03:35:26.257988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:38:44.958 [2024-06-11 03:35:26.257998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:38:44.958 [2024-06-11 03:35:26.258014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:38:44.958 [2024-06-11 03:35:26.258021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:38:44.958 [2024-06-11 03:35:26.258030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:38:44.958 [2024-06-11 03:35:26.258039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:38:44.958 [2024-06-11 03:35:26.258049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:38:44.958 [2024-06-11 03:35:26.258056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:38:44.958 [2024-06-11 03:35:26.258064] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:38:44.958 [2024-06-11 03:35:26.258068] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:38:44.958 [2024-06-11 03:35:26.258071] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:38:44.958 [2024-06-11 03:35:26.258074] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:38:44.958 [2024-06-11 03:35:26.258080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:38:44.958 [2024-06-11 03:35:26.258087] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:38:44.958 [2024-06-11 03:35:26.258091] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:38:44.958 [2024-06-11 03:35:26.258097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:38:44.958 [2024-06-11 03:35:26.258102] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:38:44.958 [2024-06-11 03:35:26.258106] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:38:44.958 [2024-06-11 03:35:26.258111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:38:44.958 [2024-06-11 03:35:26.258117] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:38:44.958 [2024-06-11 03:35:26.258121] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:38:44.958 [2024-06-11 03:35:26.258126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:38:44.958 [2024-06-11 03:35:26.258132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:38:44.958 [2024-06-11 03:35:26.258143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:38:44.958 [2024-06-11 03:35:26.258152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:38:44.958 [2024-06-11 03:35:26.258160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:38:44.958 ===================================================== 00:38:44.958 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:38:44.958 ===================================================== 00:38:44.958 Controller Capabilities/Features 00:38:44.958 ================================ 00:38:44.958 Vendor ID: 4e58 00:38:44.958 Subsystem Vendor ID: 4e58 00:38:44.958 Serial Number: SPDK1 00:38:44.958 Model Number: SPDK bdev Controller 00:38:44.958 Firmware Version: 24.09 00:38:44.958 Recommended Arb Burst: 6 00:38:44.958 IEEE OUI Identifier: 8d 6b 50 00:38:44.958 Multi-path I/O 00:38:44.958 May have multiple subsystem ports: Yes 00:38:44.958 May have multiple controllers: Yes 00:38:44.958 Associated with SR-IOV VF: No 00:38:44.958 Max Data Transfer Size: 131072 00:38:44.958 Max Number of Namespaces: 32 00:38:44.958 Max Number of I/O Queues: 127 00:38:44.958 NVMe Specification Version (VS): 1.3 00:38:44.958 NVMe Specification Version (Identify): 1.3 00:38:44.958 Maximum Queue Entries: 256 00:38:44.958 Contiguous Queues Required: Yes 00:38:44.958 Arbitration Mechanisms Supported 00:38:44.958 Weighted Round Robin: Not Supported 00:38:44.958 Vendor Specific: Not Supported 00:38:44.958 Reset Timeout: 15000 ms 00:38:44.958 Doorbell Stride: 4 bytes 00:38:44.958 NVM Subsystem Reset: Not Supported 00:38:44.958 Command Sets Supported 00:38:44.958 NVM Command Set: Supported 00:38:44.958 Boot Partition: Not Supported 00:38:44.958 Memory Page Size Minimum: 4096 bytes 00:38:44.958 Memory Page Size Maximum: 4096 bytes 00:38:44.958 Persistent Memory Region: Not Supported 00:38:44.958 Optional Asynchronous Events Supported 00:38:44.958 Namespace Attribute Notices: Supported 00:38:44.958 Firmware Activation Notices: Not Supported 00:38:44.958 ANA Change Notices: Not Supported 00:38:44.958 PLE Aggregate Log Change Notices: 
Not Supported 00:38:44.958 LBA Status Info Alert Notices: Not Supported 00:38:44.958 EGE Aggregate Log Change Notices: Not Supported 00:38:44.958 Normal NVM Subsystem Shutdown event: Not Supported 00:38:44.958 Zone Descriptor Change Notices: Not Supported 00:38:44.958 Discovery Log Change Notices: Not Supported 00:38:44.958 Controller Attributes 00:38:44.958 128-bit Host Identifier: Supported 00:38:44.958 Non-Operational Permissive Mode: Not Supported 00:38:44.958 NVM Sets: Not Supported 00:38:44.958 Read Recovery Levels: Not Supported 00:38:44.958 Endurance Groups: Not Supported 00:38:44.958 Predictable Latency Mode: Not Supported 00:38:44.958 Traffic Based Keep ALive: Not Supported 00:38:44.958 Namespace Granularity: Not Supported 00:38:44.958 SQ Associations: Not Supported 00:38:44.958 UUID List: Not Supported 00:38:44.958 Multi-Domain Subsystem: Not Supported 00:38:44.958 Fixed Capacity Management: Not Supported 00:38:44.958 Variable Capacity Management: Not Supported 00:38:44.958 Delete Endurance Group: Not Supported 00:38:44.958 Delete NVM Set: Not Supported 00:38:44.958 Extended LBA Formats Supported: Not Supported 00:38:44.958 Flexible Data Placement Supported: Not Supported 00:38:44.958 00:38:44.958 Controller Memory Buffer Support 00:38:44.958 ================================ 00:38:44.958 Supported: No 00:38:44.958 00:38:44.958 Persistent Memory Region Support 00:38:44.958 ================================ 00:38:44.958 Supported: No 00:38:44.958 00:38:44.958 Admin Command Set Attributes 00:38:44.958 ============================ 00:38:44.958 Security Send/Receive: Not Supported 00:38:44.958 Format NVM: Not Supported 00:38:44.958 Firmware Activate/Download: Not Supported 00:38:44.958 Namespace Management: Not Supported 00:38:44.958 Device Self-Test: Not Supported 00:38:44.958 Directives: Not Supported 00:38:44.958 NVMe-MI: Not Supported 00:38:44.958 Virtualization Management: Not Supported 00:38:44.958 Doorbell Buffer Config: Not Supported 00:38:44.958 Get LBA Status Capability: Not Supported 00:38:44.958 Command & Feature Lockdown Capability: Not Supported 00:38:44.958 Abort Command Limit: 4 00:38:44.958 Async Event Request Limit: 4 00:38:44.958 Number of Firmware Slots: N/A 00:38:44.958 Firmware Slot 1 Read-Only: N/A 00:38:44.958 Firmware Activation Without Reset: N/A 00:38:44.958 Multiple Update Detection Support: N/A 00:38:44.958 Firmware Update Granularity: No Information Provided 00:38:44.958 Per-Namespace SMART Log: No 00:38:44.958 Asymmetric Namespace Access Log Page: Not Supported 00:38:44.958 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:38:44.958 Command Effects Log Page: Supported 00:38:44.958 Get Log Page Extended Data: Supported 00:38:44.958 Telemetry Log Pages: Not Supported 00:38:44.958 Persistent Event Log Pages: Not Supported 00:38:44.959 Supported Log Pages Log Page: May Support 00:38:44.959 Commands Supported & Effects Log Page: Not Supported 00:38:44.959 Feature Identifiers & Effects Log Page:May Support 00:38:44.959 NVMe-MI Commands & Effects Log Page: May Support 00:38:44.959 Data Area 4 for Telemetry Log: Not Supported 00:38:44.959 Error Log Page Entries Supported: 128 00:38:44.959 Keep Alive: Supported 00:38:44.959 Keep Alive Granularity: 10000 ms 00:38:44.959 00:38:44.959 NVM Command Set Attributes 00:38:44.959 ========================== 00:38:44.959 Submission Queue Entry Size 00:38:44.959 Max: 64 00:38:44.959 Min: 64 00:38:44.959 Completion Queue Entry Size 00:38:44.959 Max: 16 00:38:44.959 Min: 16 00:38:44.959 Number of Namespaces: 32 00:38:44.959 Compare 
Command: Supported 00:38:44.959 Write Uncorrectable Command: Not Supported 00:38:44.959 Dataset Management Command: Supported 00:38:44.959 Write Zeroes Command: Supported 00:38:44.959 Set Features Save Field: Not Supported 00:38:44.959 Reservations: Not Supported 00:38:44.959 Timestamp: Not Supported 00:38:44.959 Copy: Supported 00:38:44.959 Volatile Write Cache: Present 00:38:44.959 Atomic Write Unit (Normal): 1 00:38:44.959 Atomic Write Unit (PFail): 1 00:38:44.959 Atomic Compare & Write Unit: 1 00:38:44.959 Fused Compare & Write: Supported 00:38:44.959 Scatter-Gather List 00:38:44.959 SGL Command Set: Supported (Dword aligned) 00:38:44.959 SGL Keyed: Not Supported 00:38:44.959 SGL Bit Bucket Descriptor: Not Supported 00:38:44.959 SGL Metadata Pointer: Not Supported 00:38:44.959 Oversized SGL: Not Supported 00:38:44.959 SGL Metadata Address: Not Supported 00:38:44.959 SGL Offset: Not Supported 00:38:44.959 Transport SGL Data Block: Not Supported 00:38:44.959 Replay Protected Memory Block: Not Supported 00:38:44.959 00:38:44.959 Firmware Slot Information 00:38:44.959 ========================= 00:38:44.959 Active slot: 1 00:38:44.959 Slot 1 Firmware Revision: 24.09 00:38:44.959 00:38:44.959 00:38:44.959 Commands Supported and Effects 00:38:44.959 ============================== 00:38:44.959 Admin Commands 00:38:44.959 -------------- 00:38:44.959 Get Log Page (02h): Supported 00:38:44.959 Identify (06h): Supported 00:38:44.959 Abort (08h): Supported 00:38:44.959 Set Features (09h): Supported 00:38:44.959 Get Features (0Ah): Supported 00:38:44.959 Asynchronous Event Request (0Ch): Supported 00:38:44.959 Keep Alive (18h): Supported 00:38:44.959 I/O Commands 00:38:44.959 ------------ 00:38:44.959 Flush (00h): Supported LBA-Change 00:38:44.959 Write (01h): Supported LBA-Change 00:38:44.959 Read (02h): Supported 00:38:44.959 Compare (05h): Supported 00:38:44.959 Write Zeroes (08h): Supported LBA-Change 00:38:44.959 Dataset Management (09h): Supported LBA-Change 00:38:44.959 Copy (19h): Supported LBA-Change 00:38:44.959 Unknown (79h): Supported LBA-Change 00:38:44.959 Unknown (7Ah): Supported 00:38:44.959 00:38:44.959 Error Log 00:38:44.959 ========= 00:38:44.959 00:38:44.959 Arbitration 00:38:44.959 =========== 00:38:44.959 Arbitration Burst: 1 00:38:44.959 00:38:44.959 Power Management 00:38:44.959 ================ 00:38:44.959 Number of Power States: 1 00:38:44.959 Current Power State: Power State #0 00:38:44.959 Power State #0: 00:38:44.959 Max Power: 0.00 W 00:38:44.959 Non-Operational State: Operational 00:38:44.959 Entry Latency: Not Reported 00:38:44.959 Exit Latency: Not Reported 00:38:44.959 Relative Read Throughput: 0 00:38:44.959 Relative Read Latency: 0 00:38:44.959 Relative Write Throughput: 0 00:38:44.959 Relative Write Latency: 0 00:38:44.959 Idle Power: Not Reported 00:38:44.959 Active Power: Not Reported 00:38:44.959 Non-Operational Permissive Mode: Not Supported 00:38:44.959 00:38:44.959 Health Information 00:38:44.959 ================== 00:38:44.959 Critical Warnings: 00:38:44.959 Available Spare Space: OK 00:38:44.959 Temperature: OK 00:38:44.959 Device Reliability: OK 00:38:44.959 Read Only: No 00:38:44.959 Volatile Memory Backup: OK 00:38:44.959 [2024-06-11 03:35:26.258249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:38:44.959 [2024-06-11 03:35:26.258256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014
p:1 m:0 dnr:0 00:38:44.959 [2024-06-11 03:35:26.258278] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:38:44.959 [2024-06-11 03:35:26.258286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.959 [2024-06-11 03:35:26.258292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.959 [2024-06-11 03:35:26.258297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.959 [2024-06-11 03:35:26.258302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:44.959 [2024-06-11 03:35:26.258485] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:38:44.959 [2024-06-11 03:35:26.258495] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:38:44.959 [2024-06-11 03:35:26.259486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:38:44.959 [2024-06-11 03:35:26.259532] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:38:44.959 [2024-06-11 03:35:26.259538] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:38:44.959 [2024-06-11 03:35:26.260495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:38:44.959 [2024-06-11 03:35:26.260505] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:38:44.959 [2024-06-11 03:35:26.260553] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:38:44.959 [2024-06-11 03:35:26.261518] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:38:44.959 Current Temperature: 0 Kelvin (-273 Celsius) 00:38:44.959 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:38:44.959 Available Spare: 0% 00:38:44.959 Available Spare Threshold: 0% 00:38:44.959 Life Percentage Used: 0% 00:38:44.959 Data Units Read: 0 00:38:44.959 Data Units Written: 0 00:38:44.959 Host Read Commands: 0 00:38:44.959 Host Write Commands: 0 00:38:44.959 Controller Busy Time: 0 minutes 00:38:44.959 Power Cycles: 0 00:38:44.959 Power On Hours: 0 hours 00:38:44.959 Unsafe Shutdowns: 0 00:38:44.959 Unrecoverable Media Errors: 0 00:38:44.959 Lifetime Error Log Entries: 0 00:38:44.959 Warning Temperature Time: 0 minutes 00:38:44.959 Critical Temperature Time: 0 minutes 00:38:44.959 00:38:44.959 Number of Queues 00:38:44.959 ================ 00:38:44.959 Number of I/O Submission Queues: 127 00:38:44.959 Number of I/O Completion Queues: 127 00:38:44.959 00:38:44.959 Active Namespaces 00:38:44.959 ================= 00:38:44.959 Namespace ID:1 00:38:44.959 Error Recovery Timeout: Unlimited 00:38:44.959 Command Set Identifier: NVM (00h) 00:38:44.959 Deallocate: Supported 00:38:44.959 Deallocated/Unwritten Error: Not Supported 00:38:44.959 Deallocated Read Value: Unknown 00:38:44.959 Deallocate
in Write Zeroes: Not Supported 00:38:44.959 Deallocated Guard Field: 0xFFFF 00:38:44.959 Flush: Supported 00:38:44.959 Reservation: Supported 00:38:44.959 Namespace Sharing Capabilities: Multiple Controllers 00:38:44.959 Size (in LBAs): 131072 (0GiB) 00:38:44.959 Capacity (in LBAs): 131072 (0GiB) 00:38:44.960 Utilization (in LBAs): 131072 (0GiB) 00:38:44.960 NGUID: 323972873CD84EE48EB1C539F0F04B1F 00:38:44.960 UUID: 32397287-3cd8-4ee4-8eb1-c539f0f04b1f 00:38:44.960 Thin Provisioning: Not Supported 00:38:44.960 Per-NS Atomic Units: Yes 00:38:44.960 Atomic Boundary Size (Normal): 0 00:38:44.960 Atomic Boundary Size (PFail): 0 00:38:44.960 Atomic Boundary Offset: 0 00:38:44.960 Maximum Single Source Range Length: 65535 00:38:44.960 Maximum Copy Length: 65535 00:38:44.960 Maximum Source Range Count: 1 00:38:44.960 NGUID/EUI64 Never Reused: No 00:38:44.960 Namespace Write Protected: No 00:38:44.960 Number of LBA Formats: 1 00:38:44.960 Current LBA Format: LBA Format #00 00:38:44.960 LBA Format #00: Data Size: 512 Metadata Size: 0 00:38:44.960 00:38:44.960 03:35:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:38:44.960 EAL: No free 2048 kB hugepages reported on node 1 00:38:45.218 [2024-06-11 03:35:26.477856] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:38:50.487 Initializing NVMe Controllers 00:38:50.487 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:38:50.487 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:38:50.487 Initialization complete. Launching workers. 00:38:50.487 ======================================================== 00:38:50.487 Latency(us) 00:38:50.487 Device Information : IOPS MiB/s Average min max 00:38:50.487 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39939.43 156.01 3204.68 948.98 7633.30 00:38:50.487 ======================================================== 00:38:50.487 Total : 39939.43 156.01 3204.68 948.98 7633.30 00:38:50.487 00:38:50.487 [2024-06-11 03:35:31.498295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:38:50.487 03:35:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:38:50.487 EAL: No free 2048 kB hugepages reported on node 1 00:38:50.487 [2024-06-11 03:35:31.715324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:38:55.751 Initializing NVMe Controllers 00:38:55.751 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:38:55.751 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:38:55.751 Initialization complete. Launching workers. 
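Note on the spdk_nvme_perf runs in this stretch: the read run above and the write run whose workers just launched use the same transport ID string as the identify step, and each results table reports IOPS, MiB/s, and average/min/max latency in microseconds. A minimal reproduction sketch, assuming a built SPDK tree at a placeholder $SPDK_DIR (not a path taken from this run) and a target already listening on the vfio-user socket directory shown in the log:

  # Sketch only; $SPDK_DIR is a stand-in for the build root used above.
  # -q 128: queue depth, -o 4096: 4 KiB I/Os, -w read|write: workload,
  # -t 5: run for 5 s, -c 0x2: pin the I/O thread to core 1. -s 256 and -g
  # control DPDK memory setup; the EAL parameter echo earlier in the log
  # shows -g surfacing as --single-file-segments.
  $SPDK_DIR/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2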
00:38:55.751 ======================================================== 00:38:55.751 Latency(us) 00:38:55.751 Device Information : IOPS MiB/s Average min max 00:38:55.751 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.10 62.65 7979.85 5991.18 15416.39 00:38:55.751 ======================================================== 00:38:55.751 Total : 16039.10 62.65 7979.85 5991.18 15416.39 00:38:55.751 00:38:55.751 [2024-06-11 03:35:36.750177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:38:55.751 03:35:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:38:55.751 EAL: No free 2048 kB hugepages reported on node 1 00:38:55.751 [2024-06-11 03:35:36.942100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:39:01.019 [2024-06-11 03:35:42.012316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:39:01.019 Initializing NVMe Controllers 00:39:01.019 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:39:01.019 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:39:01.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:39:01.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:39:01.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:39:01.019 Initialization complete. Launching workers. 00:39:01.019 Starting thread on core 2 00:39:01.019 Starting thread on core 3 00:39:01.019 Starting thread on core 1 00:39:01.019 03:35:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:39:01.019 EAL: No free 2048 kB hugepages reported on node 1 00:39:01.019 [2024-06-11 03:35:42.289416] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:39:04.300 [2024-06-11 03:35:45.350807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:39:04.300 Initializing NVMe Controllers 00:39:04.300 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:39:04.300 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:39:04.300 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:39:04.300 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:39:04.300 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:39:04.300 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:39:04.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:39:04.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:39:04.300 Initialization complete. Launching workers. 
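Note on the arbitration example just launched: it echoes its full effective configuration (the '-q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf ...' line above) even though only a few flags were passed explicitly, then starts one thread per core in the 0xf mask, each driving an urgent-priority queue, and prints per-core IOPS below. Invocation sketch with the same placeholder $SPDK_DIR:

  # Sketch only; flags copied from the command echoed in the log.
  # -t 3: run for 3 s; -d 256 and -g appear to control DPDK memory setup,
  # as with the perf runs above.
  $SPDK_DIR/build/examples/arbitration -t 3 -d 256 -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'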
00:39:04.300 Starting thread on core 1 with urgent priority queue 00:39:04.300 Starting thread on core 2 with urgent priority queue 00:39:04.300 Starting thread on core 3 with urgent priority queue 00:39:04.300 Starting thread on core 0 with urgent priority queue 00:39:04.300 SPDK bdev Controller (SPDK1 ) core 0: 7060.00 IO/s 14.16 secs/100000 ios 00:39:04.300 SPDK bdev Controller (SPDK1 ) core 1: 8994.33 IO/s 11.12 secs/100000 ios 00:39:04.300 SPDK bdev Controller (SPDK1 ) core 2: 9624.00 IO/s 10.39 secs/100000 ios 00:39:04.300 SPDK bdev Controller (SPDK1 ) core 3: 8183.00 IO/s 12.22 secs/100000 ios 00:39:04.300 ======================================================== 00:39:04.300 00:39:04.300 03:35:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:39:04.300 EAL: No free 2048 kB hugepages reported on node 1 00:39:04.300 [2024-06-11 03:35:45.618277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:39:04.300 Initializing NVMe Controllers 00:39:04.300 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:39:04.300 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:39:04.300 Namespace ID: 1 size: 0GB 00:39:04.300 Initialization complete. 00:39:04.300 INFO: using host memory buffer for IO 00:39:04.300 Hello world! 00:39:04.300 [2024-06-11 03:35:45.654511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:39:04.300 03:35:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:39:04.558 EAL: No free 2048 kB hugepages reported on node 1 00:39:04.558 [2024-06-11 03:35:45.921958] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:39:05.935 Initializing NVMe Controllers 00:39:05.935 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:39:05.935 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:39:05.935 Initialization complete. Launching workers. 
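Note on the overhead tool whose workers just launched: it prints two latency histograms below, one for the submit path and one for completions. The summary line is in nanoseconds while the histogram buckets are in microseconds; each row gives the bucket range, the cumulative percentage, and the per-bucket sample count in parentheses, so a row such as '3.383 - 3.398: 51.5654% ( 1051)' means 1051 samples fell in that bucket and just over half of all samples finished within 3.398 us. Invocation sketch, same placeholder:

  # Sketch only; flags copied from the command echoed in the log
  # (-H appears to enable the histogram output).
  $SPDK_DIR/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'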
00:39:05.935 submit (in ns) avg, min, max = 7218.8, 3152.4, 3999615.2 00:39:05.935 complete (in ns) avg, min, max = 18185.8, 1722.9, 4017126.7 00:39:05.935 00:39:05.935 Submit histogram 00:39:05.935 ================ 00:39:05.935 Range in us Cumulative Count 00:39:05.935 3.139 - 3.154: 0.0059% ( 1) 00:39:05.935 3.154 - 3.170: 0.0238% ( 3) 00:39:05.935 3.170 - 3.185: 0.0416% ( 3) 00:39:05.935 3.185 - 3.200: 0.0713% ( 5) 00:39:05.935 3.200 - 3.215: 0.1366% ( 11) 00:39:05.935 3.215 - 3.230: 0.4634% ( 55) 00:39:05.935 3.230 - 3.246: 1.3782% ( 154) 00:39:05.935 3.246 - 3.261: 2.8931% ( 255) 00:39:05.935 3.261 - 3.276: 5.7387% ( 479) 00:39:05.935 3.276 - 3.291: 10.5032% ( 802) 00:39:05.935 3.291 - 3.307: 16.0756% ( 938) 00:39:05.935 3.307 - 3.322: 21.9212% ( 984) 00:39:05.935 3.322 - 3.337: 28.4025% ( 1091) 00:39:05.935 3.337 - 3.352: 34.5215% ( 1030) 00:39:05.935 3.352 - 3.368: 39.7137% ( 874) 00:39:05.935 3.368 - 3.383: 45.3217% ( 944) 00:39:05.935 3.383 - 3.398: 51.5654% ( 1051) 00:39:05.935 3.398 - 3.413: 56.7576% ( 874) 00:39:05.935 3.413 - 3.429: 63.4706% ( 1130) 00:39:05.935 3.429 - 3.444: 69.5123% ( 1017) 00:39:05.935 3.444 - 3.459: 74.7401% ( 880) 00:39:05.935 3.459 - 3.474: 79.5818% ( 815) 00:39:05.935 3.474 - 3.490: 82.8373% ( 548) 00:39:05.935 3.490 - 3.505: 85.2255% ( 402) 00:39:05.935 3.505 - 3.520: 86.5621% ( 225) 00:39:05.935 3.520 - 3.535: 87.4057% ( 142) 00:39:05.935 3.535 - 3.550: 87.9760% ( 96) 00:39:05.935 3.550 - 3.566: 88.5166% ( 91) 00:39:05.935 3.566 - 3.581: 89.1463% ( 106) 00:39:05.935 3.581 - 3.596: 89.9305% ( 132) 00:39:05.935 3.596 - 3.611: 90.7741% ( 142) 00:39:05.935 3.611 - 3.627: 91.6949% ( 155) 00:39:05.935 3.627 - 3.642: 92.8117% ( 188) 00:39:05.935 3.642 - 3.657: 93.7801% ( 163) 00:39:05.935 3.657 - 3.672: 94.7544% ( 164) 00:39:05.935 3.672 - 3.688: 95.7227% ( 163) 00:39:05.935 3.688 - 3.703: 96.5782% ( 144) 00:39:05.935 3.703 - 3.718: 97.2079% ( 106) 00:39:05.935 3.718 - 3.733: 97.8613% ( 110) 00:39:05.935 3.733 - 3.749: 98.4257% ( 95) 00:39:05.935 3.749 - 3.764: 98.8178% ( 66) 00:39:05.935 3.764 - 3.779: 99.0376% ( 37) 00:39:05.935 3.779 - 3.794: 99.1861% ( 25) 00:39:05.935 3.794 - 3.810: 99.3762% ( 32) 00:39:05.935 3.810 - 3.825: 99.5010% ( 21) 00:39:05.935 3.825 - 3.840: 99.5901% ( 15) 00:39:05.935 3.840 - 3.855: 99.6139% ( 4) 00:39:05.935 3.855 - 3.870: 99.6198% ( 1) 00:39:05.935 3.886 - 3.901: 99.6317% ( 2) 00:39:05.935 3.901 - 3.931: 99.6436% ( 2) 00:39:05.935 3.931 - 3.962: 99.6495% ( 1) 00:39:05.935 4.389 - 4.419: 99.6554% ( 1) 00:39:05.935 5.790 - 5.821: 99.6673% ( 2) 00:39:05.935 5.912 - 5.943: 99.6733% ( 1) 00:39:05.935 5.943 - 5.973: 99.6792% ( 1) 00:39:05.935 6.004 - 6.034: 99.6851% ( 1) 00:39:05.935 6.126 - 6.156: 99.7089% ( 4) 00:39:05.935 6.156 - 6.187: 99.7148% ( 1) 00:39:05.935 6.278 - 6.309: 99.7267% ( 2) 00:39:05.935 6.309 - 6.339: 99.7327% ( 1) 00:39:05.935 6.370 - 6.400: 99.7386% ( 1) 00:39:05.935 6.430 - 6.461: 99.7445% ( 1) 00:39:05.935 6.674 - 6.705: 99.7505% ( 1) 00:39:05.935 6.705 - 6.735: 99.7564% ( 1) 00:39:05.935 6.796 - 6.827: 99.7624% ( 1) 00:39:05.935 6.827 - 6.857: 99.7683% ( 1) 00:39:05.935 6.888 - 6.918: 99.7743% ( 1) 00:39:05.935 7.010 - 7.040: 99.7802% ( 1) 00:39:05.935 7.070 - 7.101: 99.7861% ( 1) 00:39:05.935 7.162 - 7.192: 99.7980% ( 2) 00:39:05.935 7.192 - 7.223: 99.8040% ( 1) 00:39:05.935 7.223 - 7.253: 99.8099% ( 1) 00:39:05.935 7.284 - 7.314: 99.8158% ( 1) 00:39:05.935 7.406 - 7.436: 99.8218% ( 1) 00:39:05.935 7.497 - 7.528: 99.8277% ( 1) 00:39:05.935 7.558 - 7.589: 99.8396% ( 2) 00:39:05.935 7.589 - 7.619: 
99.8455% ( 1) 00:39:05.935 7.680 - 7.710: 99.8515% ( 1) 00:39:05.935 7.741 - 7.771: 99.8574% ( 1) 00:39:05.935 [2024-06-11 03:35:46.942886] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:39:05.935 7.771 - 7.802: 99.8634% ( 1) 00:39:05.935 8.168 - 8.229: 99.8871% ( 4) 00:39:05.935 8.411 - 8.472: 99.8931% ( 1) 00:39:05.935 8.655 - 8.716: 99.8990% ( 1) 00:39:05.935 13.531 - 13.592: 99.9049% ( 1) 00:39:05.935 3994.575 - 4025.783: 100.0000% ( 16) 00:39:05.935 00:39:05.935 Complete histogram 00:39:05.935 ================== 00:39:05.935 Range in us Cumulative Count 00:39:05.935 1.722 - 1.730: 0.0119% ( 2) 00:39:05.935 1.730 - 1.737: 0.1069% ( 16) 00:39:05.935 1.737 - 1.745: 0.3446% ( 40) 00:39:05.935 1.745 - 1.752: 0.4931% ( 25) 00:39:05.935 1.752 - 1.760: 0.5228% ( 5) 00:39:05.935 1.760 - 1.768: 0.5347% ( 2) 00:39:05.935 1.768 - 1.775: 0.7545% ( 37) 00:39:05.935 1.775 - 1.783: 4.3070% ( 598) 00:39:05.935 1.783 - 1.790: 24.5470% ( 3407) 00:39:05.935 1.790 - 1.798: 56.9120% ( 5448) 00:39:05.935 1.798 - 1.806: 73.9619% ( 2870) 00:39:05.935 1.806 - 1.813: 78.6669% ( 792) 00:39:05.935 1.813 - 1.821: 82.4927% ( 644) 00:39:05.935 1.821 - 1.829: 86.7285% ( 713) 00:39:05.935 1.829 - 1.836: 90.2454% ( 592) 00:39:05.935 1.836 - 1.844: 93.1919% ( 496) 00:39:05.935 1.844 - 1.851: 95.3009% ( 355) 00:39:05.935 1.851 - 1.859: 96.8098% ( 254) 00:39:05.935 1.859 - 1.867: 97.7544% ( 159) 00:39:05.935 1.867 - 1.874: 98.4019% ( 109) 00:39:05.935 1.874 - 1.882: 98.7822% ( 64) 00:39:05.935 1.882 - 1.890: 98.9960% ( 36) 00:39:05.935 1.890 - 1.897: 99.0970% ( 17) 00:39:05.935 1.897 - 1.905: 99.1624% ( 11) 00:39:05.935 1.905 - 1.912: 99.2099% ( 8) 00:39:05.935 1.912 - 1.920: 99.2396% ( 5) 00:39:05.935 1.920 - 1.928: 99.2634% ( 4) 00:39:05.935 1.928 - 1.935: 99.2693% ( 1) 00:39:05.936 1.935 - 1.943: 99.2990% ( 5) 00:39:05.936 1.943 - 1.950: 99.3109% ( 2) 00:39:05.936 1.950 - 1.966: 99.3525% ( 7) 00:39:05.936 1.966 - 1.981: 99.3822% ( 5) 00:39:05.936 1.981 - 1.996: 99.3940% ( 2) 00:39:05.936 1.996 - 2.011: 99.4000% ( 1) 00:39:05.936 2.042 - 2.057: 99.4059% ( 1) 00:39:05.936 2.072 - 2.088: 99.4178% ( 2) 00:39:05.936 2.088 - 2.103: 99.4238% ( 1) 00:39:05.936 2.118 - 2.133: 99.4297% ( 1) 00:39:05.936 2.194 - 2.210: 99.4416% ( 2) 00:39:05.936 4.328 - 4.358: 99.4475% ( 1) 00:39:05.936 4.419 - 4.450: 99.4535% ( 1) 00:39:05.936 4.480 - 4.510: 99.4594% ( 1) 00:39:05.936 4.602 - 4.632: 99.4653% ( 1) 00:39:05.936 4.663 - 4.693: 99.4772% ( 2) 00:39:05.936 4.724 - 4.754: 99.4832% ( 1) 00:39:05.936 4.907 - 4.937: 99.4891% ( 1) 00:39:05.936 4.937 - 4.968: 99.5010% ( 2) 00:39:05.936 5.150 - 5.181: 99.5069% ( 1) 00:39:05.936 5.303 - 5.333: 99.5188% ( 2) 00:39:05.936 5.425 - 5.455: 99.5247% ( 1) 00:39:05.936 5.851 - 5.882: 99.5307% ( 1) 00:39:05.936 5.943 - 5.973: 99.5366% ( 1) 00:39:05.936 6.278 - 6.309: 99.5426% ( 1) 00:39:05.936 6.674 - 6.705: 99.5544% ( 2) 00:39:05.936 6.705 - 6.735: 99.5604% ( 1) 00:39:05.936 7.436 - 7.467: 99.5663% ( 1) 00:39:05.936 7.619 - 7.650: 99.5723% ( 1) 00:39:05.936 9.021 - 9.082: 99.5782% ( 1) 00:39:05.936 10.179 - 10.240: 99.5842% ( 1) 00:39:05.936 12.190 - 12.251: 99.5901% ( 1) 00:39:05.936 3994.575 - 4025.783: 100.0000% ( 69) 00:39:05.936 00:39:05.936 03:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:39:05.936 03:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:39:05.936 
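Note on the aer_vfio_user step starting here: it exercises asynchronous event reporting. test/nvme/aer/aer attaches to the controller and registers callbacks, then the script hot-adds a namespace over JSON-RPC so the target raises a Changed Namespace notice (visible below as 'aer_cb for log page 4'). The RPC sequence, reconstructed from the commands echoed further down (rpc.py lives under scripts/ in the same checkout):

  # Sketch only; command names and arguments are the ones echoed in the log below.
  scripts/rpc.py nvmf_get_subsystems                        # baseline listing
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3   # 64 MB bdev, 512 B blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
  scripts/rpc.py nvmf_get_subsystems                        # Malloc3 now listed as nsid 2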
03:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:39:05.936 03:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:39:05.936 03:35:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:39:05.936 [ 00:39:05.936 { 00:39:05.936 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:05.936 "subtype": "Discovery", 00:39:05.936 "listen_addresses": [], 00:39:05.936 "allow_any_host": true, 00:39:05.936 "hosts": [] 00:39:05.936 }, 00:39:05.936 { 00:39:05.936 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:39:05.936 "subtype": "NVMe", 00:39:05.936 "listen_addresses": [ 00:39:05.936 { 00:39:05.936 "trtype": "VFIOUSER", 00:39:05.936 "adrfam": "IPv4", 00:39:05.936 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:39:05.936 "trsvcid": "0" 00:39:05.936 } 00:39:05.936 ], 00:39:05.936 "allow_any_host": true, 00:39:05.936 "hosts": [], 00:39:05.936 "serial_number": "SPDK1", 00:39:05.936 "model_number": "SPDK bdev Controller", 00:39:05.936 "max_namespaces": 32, 00:39:05.936 "min_cntlid": 1, 00:39:05.936 "max_cntlid": 65519, 00:39:05.936 "namespaces": [ 00:39:05.936 { 00:39:05.936 "nsid": 1, 00:39:05.936 "bdev_name": "Malloc1", 00:39:05.936 "name": "Malloc1", 00:39:05.936 "nguid": "323972873CD84EE48EB1C539F0F04B1F", 00:39:05.936 "uuid": "32397287-3cd8-4ee4-8eb1-c539f0f04b1f" 00:39:05.936 } 00:39:05.936 ] 00:39:05.936 }, 00:39:05.936 { 00:39:05.936 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:39:05.936 "subtype": "NVMe", 00:39:05.936 "listen_addresses": [ 00:39:05.936 { 00:39:05.936 "trtype": "VFIOUSER", 00:39:05.936 "adrfam": "IPv4", 00:39:05.936 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:39:05.936 "trsvcid": "0" 00:39:05.936 } 00:39:05.936 ], 00:39:05.936 "allow_any_host": true, 00:39:05.936 "hosts": [], 00:39:05.936 "serial_number": "SPDK2", 00:39:05.936 "model_number": "SPDK bdev Controller", 00:39:05.936 "max_namespaces": 32, 00:39:05.936 "min_cntlid": 1, 00:39:05.936 "max_cntlid": 65519, 00:39:05.936 "namespaces": [ 00:39:05.936 { 00:39:05.936 "nsid": 1, 00:39:05.936 "bdev_name": "Malloc2", 00:39:05.936 "name": "Malloc2", 00:39:05.936 "nguid": "91E9CCF0C1F743C59654F74979F212E4", 00:39:05.936 "uuid": "91e9ccf0-c1f7-43c5-9654-f74979f212e4" 00:39:05.936 } 00:39:05.936 ] 00:39:05.936 } 00:39:05.936 ] 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2120802 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:39:05.936 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:39:05.936 EAL: No free 2048 kB hugepages reported on node 1 00:39:05.936 [2024-06-11 03:35:47.303514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:39:06.196 Malloc3 00:39:06.196 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:39:06.196 [2024-06-11 03:35:47.535174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:39:06.196 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:39:06.196 Asynchronous Event Request test 00:39:06.196 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:39:06.196 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:39:06.196 Registering asynchronous event callbacks... 00:39:06.196 Starting namespace attribute notice tests for all controllers... 00:39:06.196 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:39:06.196 aer_cb - Changed Namespace 00:39:06.196 Cleaning up... 00:39:06.457 [ 00:39:06.457 { 00:39:06.457 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:06.457 "subtype": "Discovery", 00:39:06.457 "listen_addresses": [], 00:39:06.457 "allow_any_host": true, 00:39:06.457 "hosts": [] 00:39:06.457 }, 00:39:06.457 { 00:39:06.457 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:39:06.457 "subtype": "NVMe", 00:39:06.457 "listen_addresses": [ 00:39:06.457 { 00:39:06.457 "trtype": "VFIOUSER", 00:39:06.457 "adrfam": "IPv4", 00:39:06.457 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:39:06.457 "trsvcid": "0" 00:39:06.457 } 00:39:06.457 ], 00:39:06.457 "allow_any_host": true, 00:39:06.457 "hosts": [], 00:39:06.457 "serial_number": "SPDK1", 00:39:06.457 "model_number": "SPDK bdev Controller", 00:39:06.457 "max_namespaces": 32, 00:39:06.457 "min_cntlid": 1, 00:39:06.457 "max_cntlid": 65519, 00:39:06.457 "namespaces": [ 00:39:06.457 { 00:39:06.457 "nsid": 1, 00:39:06.457 "bdev_name": "Malloc1", 00:39:06.457 "name": "Malloc1", 00:39:06.457 "nguid": "323972873CD84EE48EB1C539F0F04B1F", 00:39:06.457 "uuid": "32397287-3cd8-4ee4-8eb1-c539f0f04b1f" 00:39:06.457 }, 00:39:06.457 { 00:39:06.457 "nsid": 2, 00:39:06.457 "bdev_name": "Malloc3", 00:39:06.457 "name": "Malloc3", 00:39:06.457 "nguid": "1BA9A4ED5A7B44ED963745219692C8A5", 00:39:06.457 "uuid": "1ba9a4ed-5a7b-44ed-9637-45219692c8a5" 00:39:06.457 } 00:39:06.457 ] 00:39:06.457 }, 00:39:06.457 { 00:39:06.457 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:39:06.457 "subtype": "NVMe", 00:39:06.457 "listen_addresses": [ 00:39:06.457 { 00:39:06.457 "trtype": "VFIOUSER", 00:39:06.457 "adrfam": "IPv4", 00:39:06.457 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:39:06.457 "trsvcid": "0" 00:39:06.457 } 00:39:06.457 ], 00:39:06.457 "allow_any_host": true, 00:39:06.457 "hosts": [], 00:39:06.457 "serial_number": "SPDK2", 00:39:06.457 "model_number": "SPDK bdev Controller", 00:39:06.457 
"max_namespaces": 32, 00:39:06.457 "min_cntlid": 1, 00:39:06.457 "max_cntlid": 65519, 00:39:06.457 "namespaces": [ 00:39:06.457 { 00:39:06.457 "nsid": 1, 00:39:06.457 "bdev_name": "Malloc2", 00:39:06.457 "name": "Malloc2", 00:39:06.457 "nguid": "91E9CCF0C1F743C59654F74979F212E4", 00:39:06.457 "uuid": "91e9ccf0-c1f7-43c5-9654-f74979f212e4" 00:39:06.457 } 00:39:06.457 ] 00:39:06.457 } 00:39:06.457 ] 00:39:06.457 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2120802 00:39:06.457 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:39:06.457 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:39:06.457 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:39:06.457 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:39:06.457 [2024-06-11 03:35:47.770992] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:39:06.457 [2024-06-11 03:35:47.771044] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121028 ] 00:39:06.457 EAL: No free 2048 kB hugepages reported on node 1 00:39:06.457 [2024-06-11 03:35:47.801187] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:39:06.457 [2024-06-11 03:35:47.804894] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:39:06.457 [2024-06-11 03:35:47.804916] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f97bc767000 00:39:06.457 [2024-06-11 03:35:47.805896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.806900] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.807912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.808916] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.809921] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.810935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.811946] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.812952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:39:06.457 [2024-06-11 03:35:47.813957] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:39:06.458 [2024-06-11 03:35:47.813967] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f97bb52e000 00:39:06.458 [2024-06-11 03:35:47.814959] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:39:06.458 [2024-06-11 03:35:47.826807] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:39:06.458 [2024-06-11 03:35:47.826828] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:39:06.458 [2024-06-11 03:35:47.828879] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:39:06.458 [2024-06-11 03:35:47.828912] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:39:06.458 [2024-06-11 03:35:47.828975] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:39:06.458 [2024-06-11 03:35:47.828989] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:39:06.458 [2024-06-11 03:35:47.828994] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:39:06.458 [2024-06-11 03:35:47.829885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:39:06.458 [2024-06-11 03:35:47.829896] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:39:06.458 [2024-06-11 03:35:47.829902] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:39:06.458 [2024-06-11 03:35:47.830890] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:39:06.458 [2024-06-11 03:35:47.830901] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:39:06.458 [2024-06-11 03:35:47.830910] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:39:06.458 [2024-06-11 03:35:47.831896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:39:06.458 [2024-06-11 03:35:47.831903] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:39:06.458 [2024-06-11 03:35:47.832903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:39:06.458 [2024-06-11 03:35:47.832911] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:39:06.458 [2024-06-11 03:35:47.832915] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:39:06.458 [2024-06-11 03:35:47.832921] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:39:06.458 [2024-06-11 03:35:47.833025] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:39:06.458 [2024-06-11 03:35:47.833030] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:39:06.458 [2024-06-11 03:35:47.833034] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:39:06.458 [2024-06-11 03:35:47.833913] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:39:06.458 [2024-06-11 03:35:47.834923] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:39:06.458 [2024-06-11 03:35:47.835927] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:39:06.458 [2024-06-11 03:35:47.836930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:06.458 [2024-06-11 03:35:47.836966] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:39:06.458 [2024-06-11 03:35:47.837947] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:39:06.458 [2024-06-11 03:35:47.837955] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:39:06.458 [2024-06-11 03:35:47.837958] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:39:06.458 [2024-06-11 03:35:47.837975] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:39:06.458 [2024-06-11 03:35:47.837984] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:39:06.458 [2024-06-11 03:35:47.837997] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:39:06.458 [2024-06-11 03:35:47.838001] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:39:06.458 [2024-06-11 03:35:47.838016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:39:06.458 [2024-06-11 03:35:47.846017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:39:06.458 [2024-06-11 03:35:47.846028] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:39:06.458 [2024-06-11 03:35:47.846035] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:39:06.458 [2024-06-11 03:35:47.846041] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:39:06.458 [2024-06-11 03:35:47.846045] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:39:06.458 [2024-06-11 03:35:47.846049] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:39:06.458 [2024-06-11 03:35:47.846053] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:39:06.458 [2024-06-11 03:35:47.846057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:39:06.458 [2024-06-11 03:35:47.846063] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:39:06.458 [2024-06-11 03:35:47.846072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:39:06.458 [2024-06-11 03:35:47.854015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:39:06.458 [2024-06-11 03:35:47.854026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.458 [2024-06-11 03:35:47.854034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.458 [2024-06-11 03:35:47.854041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.458 [2024-06-11 03:35:47.854048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:39:06.458 [2024-06-11 03:35:47.854052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:39:06.458 [2024-06-11 03:35:47.854060] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:39:06.458 [2024-06-11 03:35:47.854068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.862017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.862024] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:39:06.771 [2024-06-11 03:35:47.862029] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.862035] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.862043] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.862051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.870015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.870059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.870066] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.870074] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:39:06.771 [2024-06-11 03:35:47.870078] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:39:06.771 [2024-06-11 03:35:47.870084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.878016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.878025] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:39:06.771 [2024-06-11 03:35:47.878033] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.878039] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.878045] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:39:06.771 [2024-06-11 03:35:47.878049] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:39:06.771 [2024-06-11 03:35:47.878055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.886018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.886032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.886038] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.886045] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:39:06.771 [2024-06-11 03:35:47.886048] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:39:06.771 [2024-06-11 03:35:47.886054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.894016] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.894024] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.894029] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.894038] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.894044] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.894048] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.894052] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:39:06.771 [2024-06-11 03:35:47.894056] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:39:06.771 [2024-06-11 03:35:47.894060] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:39:06.771 [2024-06-11 03:35:47.894079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.902016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.902028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.910014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.910026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.918013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.918025] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.926013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.926024] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:39:06.771 [2024-06-11 03:35:47.926028] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:39:06.771 [2024-06-11 03:35:47.926031] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:39:06.771 [2024-06-11 03:35:47.926034] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:39:06.771 [2024-06-11 03:35:47.926040] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:39:06.771 [2024-06-11 03:35:47.926046] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:39:06.771 [2024-06-11 03:35:47.926049] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:39:06.771 [2024-06-11 03:35:47.926055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.926061] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:39:06.771 [2024-06-11 03:35:47.926064] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:39:06.771 [2024-06-11 03:35:47.926070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.926076] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:39:06.771 [2024-06-11 03:35:47.926079] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:39:06.771 [2024-06-11 03:35:47.926085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:39:06.771 [2024-06-11 03:35:47.934015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.934029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.934037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:39:06.771 [2024-06-11 03:35:47.934044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:39:06.771 ===================================================== 00:39:06.771 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:39:06.772 ===================================================== 00:39:06.772 Controller Capabilities/Features 00:39:06.772 ================================ 00:39:06.772 Vendor ID: 4e58 00:39:06.772 Subsystem Vendor ID: 4e58 00:39:06.772 Serial Number: SPDK2 00:39:06.772 Model Number: SPDK bdev Controller 00:39:06.772 Firmware Version: 24.09 00:39:06.772 Recommended Arb Burst: 6 00:39:06.772 IEEE OUI Identifier: 8d 6b 50 00:39:06.772 Multi-path I/O 00:39:06.772 May have multiple subsystem ports: Yes 00:39:06.772 May have multiple controllers: Yes 00:39:06.772 Associated with SR-IOV VF: No 00:39:06.772 Max Data Transfer Size: 131072 00:39:06.772 Max Number of Namespaces: 32 00:39:06.772 Max Number of I/O Queues: 127 00:39:06.772 NVMe Specification Version (VS): 1.3 00:39:06.772 NVMe Specification Version (Identify): 1.3 00:39:06.772 Maximum Queue Entries: 256 00:39:06.772 Contiguous Queues Required: Yes 00:39:06.772 Arbitration Mechanisms Supported 00:39:06.772 Weighted Round Robin: Not Supported 00:39:06.772 Vendor Specific: Not Supported 00:39:06.772 Reset Timeout: 15000 ms 00:39:06.772 Doorbell Stride: 4 bytes 
00:39:06.772 NVM Subsystem Reset: Not Supported 00:39:06.772 Command Sets Supported 00:39:06.772 NVM Command Set: Supported 00:39:06.772 Boot Partition: Not Supported 00:39:06.772 Memory Page Size Minimum: 4096 bytes 00:39:06.772 Memory Page Size Maximum: 4096 bytes 00:39:06.772 Persistent Memory Region: Not Supported 00:39:06.772 Optional Asynchronous Events Supported 00:39:06.772 Namespace Attribute Notices: Supported 00:39:06.772 Firmware Activation Notices: Not Supported 00:39:06.772 ANA Change Notices: Not Supported 00:39:06.772 PLE Aggregate Log Change Notices: Not Supported 00:39:06.772 LBA Status Info Alert Notices: Not Supported 00:39:06.772 EGE Aggregate Log Change Notices: Not Supported 00:39:06.772 Normal NVM Subsystem Shutdown event: Not Supported 00:39:06.772 Zone Descriptor Change Notices: Not Supported 00:39:06.772 Discovery Log Change Notices: Not Supported 00:39:06.772 Controller Attributes 00:39:06.772 128-bit Host Identifier: Supported 00:39:06.772 Non-Operational Permissive Mode: Not Supported 00:39:06.772 NVM Sets: Not Supported 00:39:06.772 Read Recovery Levels: Not Supported 00:39:06.772 Endurance Groups: Not Supported 00:39:06.772 Predictable Latency Mode: Not Supported 00:39:06.772 Traffic Based Keep ALive: Not Supported 00:39:06.772 Namespace Granularity: Not Supported 00:39:06.772 SQ Associations: Not Supported 00:39:06.772 UUID List: Not Supported 00:39:06.772 Multi-Domain Subsystem: Not Supported 00:39:06.772 Fixed Capacity Management: Not Supported 00:39:06.772 Variable Capacity Management: Not Supported 00:39:06.772 Delete Endurance Group: Not Supported 00:39:06.772 Delete NVM Set: Not Supported 00:39:06.772 Extended LBA Formats Supported: Not Supported 00:39:06.772 Flexible Data Placement Supported: Not Supported 00:39:06.772 00:39:06.772 Controller Memory Buffer Support 00:39:06.772 ================================ 00:39:06.772 Supported: No 00:39:06.772 00:39:06.772 Persistent Memory Region Support 00:39:06.772 ================================ 00:39:06.772 Supported: No 00:39:06.772 00:39:06.772 Admin Command Set Attributes 00:39:06.772 ============================ 00:39:06.772 Security Send/Receive: Not Supported 00:39:06.772 Format NVM: Not Supported 00:39:06.772 Firmware Activate/Download: Not Supported 00:39:06.772 Namespace Management: Not Supported 00:39:06.772 Device Self-Test: Not Supported 00:39:06.772 Directives: Not Supported 00:39:06.772 NVMe-MI: Not Supported 00:39:06.772 Virtualization Management: Not Supported 00:39:06.772 Doorbell Buffer Config: Not Supported 00:39:06.772 Get LBA Status Capability: Not Supported 00:39:06.772 Command & Feature Lockdown Capability: Not Supported 00:39:06.772 Abort Command Limit: 4 00:39:06.772 Async Event Request Limit: 4 00:39:06.772 Number of Firmware Slots: N/A 00:39:06.772 Firmware Slot 1 Read-Only: N/A 00:39:06.772 Firmware Activation Without Reset: N/A 00:39:06.772 Multiple Update Detection Support: N/A 00:39:06.772 Firmware Update Granularity: No Information Provided 00:39:06.772 Per-Namespace SMART Log: No 00:39:06.772 Asymmetric Namespace Access Log Page: Not Supported 00:39:06.772 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:39:06.772 Command Effects Log Page: Supported 00:39:06.772 Get Log Page Extended Data: Supported 00:39:06.772 Telemetry Log Pages: Not Supported 00:39:06.772 Persistent Event Log Pages: Not Supported 00:39:06.772 Supported Log Pages Log Page: May Support 00:39:06.772 Commands Supported & Effects Log Page: Not Supported 00:39:06.772 Feature Identifiers & Effects Log Page:May 
Support 00:39:06.772 NVMe-MI Commands & Effects Log Page: May Support 00:39:06.772 Data Area 4 for Telemetry Log: Not Supported 00:39:06.772 Error Log Page Entries Supported: 128 00:39:06.772 Keep Alive: Supported 00:39:06.772 Keep Alive Granularity: 10000 ms 00:39:06.772 00:39:06.772 NVM Command Set Attributes 00:39:06.772 ========================== 00:39:06.772 Submission Queue Entry Size 00:39:06.772 Max: 64 00:39:06.772 Min: 64 00:39:06.772 Completion Queue Entry Size 00:39:06.772 Max: 16 00:39:06.772 Min: 16 00:39:06.772 Number of Namespaces: 32 00:39:06.772 Compare Command: Supported 00:39:06.772 Write Uncorrectable Command: Not Supported 00:39:06.772 Dataset Management Command: Supported 00:39:06.772 Write Zeroes Command: Supported 00:39:06.773 Set Features Save Field: Not Supported 00:39:06.773 Reservations: Not Supported 00:39:06.773 Timestamp: Not Supported 00:39:06.773 Copy: Supported 00:39:06.773 Volatile Write Cache: Present 00:39:06.773 Atomic Write Unit (Normal): 1 00:39:06.773 Atomic Write Unit (PFail): 1 00:39:06.773 Atomic Compare & Write Unit: 1 00:39:06.773 Fused Compare & Write: Supported 00:39:06.773 Scatter-Gather List 00:39:06.773 SGL Command Set: Supported (Dword aligned) 00:39:06.773 SGL Keyed: Not Supported 00:39:06.773 SGL Bit Bucket Descriptor: Not Supported 00:39:06.773 SGL Metadata Pointer: Not Supported 00:39:06.773 Oversized SGL: Not Supported 00:39:06.773 SGL Metadata Address: Not Supported 00:39:06.773 SGL Offset: Not Supported 00:39:06.773 Transport SGL Data Block: Not Supported 00:39:06.773 Replay Protected Memory Block: Not Supported 00:39:06.773 00:39:06.773 Firmware Slot Information 00:39:06.773 ========================= 00:39:06.773 Active slot: 1 00:39:06.773 Slot 1 Firmware Revision: 24.09 00:39:06.773 00:39:06.773 00:39:06.773 Commands Supported and Effects 00:39:06.773 ============================== 00:39:06.773 Admin Commands 00:39:06.773 -------------- 00:39:06.773 Get Log Page (02h): Supported 00:39:06.773 Identify (06h): Supported 00:39:06.773 Abort (08h): Supported 00:39:06.773 Set Features (09h): Supported 00:39:06.773 Get Features (0Ah): Supported 00:39:06.773 Asynchronous Event Request (0Ch): Supported 00:39:06.773 Keep Alive (18h): Supported 00:39:06.773 I/O Commands 00:39:06.773 ------------ 00:39:06.773 Flush (00h): Supported LBA-Change 00:39:06.773 Write (01h): Supported LBA-Change 00:39:06.773 Read (02h): Supported 00:39:06.773 Compare (05h): Supported 00:39:06.773 Write Zeroes (08h): Supported LBA-Change 00:39:06.773 Dataset Management (09h): Supported LBA-Change 00:39:06.773 Copy (19h): Supported LBA-Change 00:39:06.773 Unknown (79h): Supported LBA-Change 00:39:06.773 Unknown (7Ah): Supported 00:39:06.773 00:39:06.773 Error Log 00:39:06.773 ========= 00:39:06.773 00:39:06.773 Arbitration 00:39:06.773 =========== 00:39:06.773 Arbitration Burst: 1 00:39:06.773 00:39:06.773 Power Management 00:39:06.773 ================ 00:39:06.773 Number of Power States: 1 00:39:06.773 Current Power State: Power State #0 00:39:06.773 Power State #0: 00:39:06.773 Max Power: 0.00 W 00:39:06.773 Non-Operational State: Operational 00:39:06.773 Entry Latency: Not Reported 00:39:06.773 Exit Latency: Not Reported 00:39:06.773 Relative Read Throughput: 0 00:39:06.773 Relative Read Latency: 0 00:39:06.773 Relative Write Throughput: 0 00:39:06.773 Relative Write Latency: 0 00:39:06.773 Idle Power: Not Reported 00:39:06.773 Active Power: Not Reported 00:39:06.773 Non-Operational Permissive Mode: Not Supported 00:39:06.773 00:39:06.773 Health Information 
00:39:06.773 ================== 00:39:06.773 Critical Warnings: 00:39:06.773 Available Spare Space: OK 00:39:06.773 Temperature: OK 00:39:06.773 Device Reliability: OK 00:39:06.773 Read Only: No 00:39:06.773 Volatile Memory Backup: OK 00:39:06.773 Current Temperature: 0 Kelvin (-273 Celsius) 00:39:06.773 [2024-06-11 03:35:47.934130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:39:06.773 [2024-06-11 03:35:47.942014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:39:06.773 [2024-06-11 03:35:47.942042] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:39:06.773 [2024-06-11 03:35:47.942050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.773 [2024-06-11 03:35:47.942056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.773 [2024-06-11 03:35:47.942061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.773 [2024-06-11 03:35:47.942066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:06.773 [2024-06-11 03:35:47.942114] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:39:06.773 [2024-06-11 03:35:47.942123] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:39:06.773 [2024-06-11 03:35:47.943123] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:06.773 [2024-06-11 03:35:47.943165] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:39:06.773 [2024-06-11 03:35:47.943171] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:39:06.773 [2024-06-11 03:35:47.944130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:39:06.773 [2024-06-11 03:35:47.944140] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:39:06.773 [2024-06-11 03:35:47.944191] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:39:06.773 [2024-06-11 03:35:47.945148] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:39:06.773 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:39:06.773 Available Spare: 0% 00:39:06.773 Available Spare Threshold: 0% 00:39:06.773 Life Percentage Used: 0% 00:39:06.773 Data Units Read: 0 00:39:06.773 Data Units Written: 0 00:39:06.773 Host Read Commands: 0 00:39:06.773 Host Write Commands: 0 00:39:06.773 Controller Busy Time: 0 minutes 00:39:06.773 Power Cycles: 0 00:39:06.773 Power On Hours: 0 hours 00:39:06.773 Unsafe Shutdowns: 0 00:39:06.773 Unrecoverable Media Errors: 0 00:39:06.773 Lifetime Error Log Entries: 0 00:39:06.773 Warning Temperature Time: 0
minutes 00:39:06.773 Critical Temperature Time: 0 minutes 00:39:06.773 00:39:06.773 Number of Queues 00:39:06.773 ================ 00:39:06.773 Number of I/O Submission Queues: 127 00:39:06.773 Number of I/O Completion Queues: 127 00:39:06.773 00:39:06.773 Active Namespaces 00:39:06.773 ================= 00:39:06.773 Namespace ID:1 00:39:06.773 Error Recovery Timeout: Unlimited 00:39:06.773 Command Set Identifier: NVM (00h) 00:39:06.773 Deallocate: Supported 00:39:06.773 Deallocated/Unwritten Error: Not Supported 00:39:06.773 Deallocated Read Value: Unknown 00:39:06.773 Deallocate in Write Zeroes: Not Supported 00:39:06.773 Deallocated Guard Field: 0xFFFF 00:39:06.773 Flush: Supported 00:39:06.773 Reservation: Supported 00:39:06.773 Namespace Sharing Capabilities: Multiple Controllers 00:39:06.773 Size (in LBAs): 131072 (0GiB) 00:39:06.773 Capacity (in LBAs): 131072 (0GiB) 00:39:06.773 Utilization (in LBAs): 131072 (0GiB) 00:39:06.773 NGUID: 91E9CCF0C1F743C59654F74979F212E4 00:39:06.773 UUID: 91e9ccf0-c1f7-43c5-9654-f74979f212e4 00:39:06.773 Thin Provisioning: Not Supported 00:39:06.773 Per-NS Atomic Units: Yes 00:39:06.773 Atomic Boundary Size (Normal): 0 00:39:06.773 Atomic Boundary Size (PFail): 0 00:39:06.773 Atomic Boundary Offset: 0 00:39:06.773 Maximum Single Source Range Length: 65535 00:39:06.773 Maximum Copy Length: 65535 00:39:06.773 Maximum Source Range Count: 1 00:39:06.773 NGUID/EUI64 Never Reused: No 00:39:06.773 Namespace Write Protected: No 00:39:06.773 Number of LBA Formats: 1 00:39:06.773 Current LBA Format: LBA Format #00 00:39:06.773 LBA Format #00: Data Size: 512 Metadata Size: 0 00:39:06.773 00:39:06.773 03:35:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:39:06.774 EAL: No free 2048 kB hugepages reported on node 1 00:39:06.774 [2024-06-11 03:35:48.153163] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:12.040 Initializing NVMe Controllers 00:39:12.040 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:39:12.040 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:39:12.040 Initialization complete. Launching workers. 
00:39:12.040 ======================================================== 00:39:12.040 Latency(us) 00:39:12.040 Device Information : IOPS MiB/s Average min max 00:39:12.040 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39909.26 155.90 3206.88 941.52 7569.33 00:39:12.040 ======================================================== 00:39:12.040 Total : 39909.26 155.90 3206.88 941.52 7569.33 00:39:12.040 00:39:12.040 [2024-06-11 03:35:53.258262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:12.040 03:35:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:39:12.040 EAL: No free 2048 kB hugepages reported on node 1 00:39:12.298 [2024-06-11 03:35:53.475895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:17.562 Initializing NVMe Controllers 00:39:17.562 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:39:17.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:39:17.562 Initialization complete. Launching workers. 00:39:17.562 ======================================================== 00:39:17.562 Latency(us) 00:39:17.562 Device Information : IOPS MiB/s Average min max 00:39:17.562 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39933.33 155.99 3205.18 942.41 7633.92 00:39:17.562 ======================================================== 00:39:17.562 Total : 39933.33 155.99 3205.18 942.41 7633.92 00:39:17.562 00:39:17.562 [2024-06-11 03:35:58.496277] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:17.562 03:35:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:39:17.562 EAL: No free 2048 kB hugepages reported on node 1 00:39:17.562 [2024-06-11 03:35:58.691481] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:22.826 [2024-06-11 03:36:03.828106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:22.826 Initializing NVMe Controllers 00:39:22.826 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:39:22.826 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:39:22.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:39:22.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:39:22.826 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:39:22.826 Initialization complete. Launching workers. 
00:39:22.826 Starting thread on core 2 00:39:22.826 Starting thread on core 3 00:39:22.826 Starting thread on core 1 00:39:22.826 03:36:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:39:22.826 EAL: No free 2048 kB hugepages reported on node 1 00:39:22.826 [2024-06-11 03:36:04.105396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:26.111 [2024-06-11 03:36:07.164278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:26.111 Initializing NVMe Controllers 00:39:26.111 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:39:26.111 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:39:26.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:39:26.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:39:26.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:39:26.111 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:39:26.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:39:26.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:39:26.111 Initialization complete. Launching workers. 00:39:26.111 Starting thread on core 1 with urgent priority queue 00:39:26.111 Starting thread on core 2 with urgent priority queue 00:39:26.111 Starting thread on core 3 with urgent priority queue 00:39:26.111 Starting thread on core 0 with urgent priority queue 00:39:26.111 SPDK bdev Controller (SPDK2 ) core 0: 8924.33 IO/s 11.21 secs/100000 ios 00:39:26.111 SPDK bdev Controller (SPDK2 ) core 1: 9080.67 IO/s 11.01 secs/100000 ios 00:39:26.111 SPDK bdev Controller (SPDK2 ) core 2: 7164.67 IO/s 13.96 secs/100000 ios 00:39:26.111 SPDK bdev Controller (SPDK2 ) core 3: 10748.33 IO/s 9.30 secs/100000 ios 00:39:26.111 ======================================================== 00:39:26.111 00:39:26.111 03:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:39:26.111 EAL: No free 2048 kB hugepages reported on node 1 00:39:26.111 [2024-06-11 03:36:07.436485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:26.111 Initializing NVMe Controllers 00:39:26.111 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:39:26.111 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:39:26.111 Namespace ID: 1 size: 0GB 00:39:26.111 Initialization complete. 00:39:26.111 INFO: using host memory buffer for IO 00:39:26.111 Hello world! 
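The identify, perf, reconnect, arbitration, and hello_world runs above all drive the same user-space controller through one VFIOUSER transport string. A minimal sketch of that pattern, assuming an SPDK build with vfio-user support and a target already listening on the socket directory shown in the log; SPDK_ROOT is a hypothetical stand-in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix, and every flag below is taken from the invocations traced above:

# Hypothetical reproduction of the runs traced in this log.
SPDK_ROOT=/path/to/spdk   # assumption: your SPDK checkout, built with vfio-user support
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# Controller/namespace identify over vfio-user (the -L flags enable the debug traces seen above).
$SPDK_ROOT/build/bin/spdk_nvme_identify -r "$TR" -g -L nvme -L nvme_vfio -L vfio_pci

# 5 s, 4 KiB, queue-depth-128 read and write benchmarks pinned to core 1 (-c 0x2).
$SPDK_ROOT/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
$SPDK_ROOT/build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

# Single-I/O smoke test against the same endpoint.
$SPDK_ROOT/build/examples/hello_world -d 256 -g -r "$TR"

Each run attaches, enables the controller, performs its I/O, and detaches, which is why every block of output in the log is bracketed by an enabling controller / disabling controller pair.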
00:39:26.111 [2024-06-11 03:36:07.448558] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:26.111 03:36:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:39:26.369 EAL: No free 2048 kB hugepages reported on node 1 00:39:26.369 [2024-06-11 03:36:07.711680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:27.747 Initializing NVMe Controllers 00:39:27.747 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:39:27.747 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:39:27.747 Initialization complete. Launching workers. 00:39:27.747 submit (in ns) avg, min, max = 5303.7, 3141.9, 4002230.5 00:39:27.747 complete (in ns) avg, min, max = 21299.5, 1720.0, 7988612.4 00:39:27.747 00:39:27.747 Submit histogram 00:39:27.747 ================ 00:39:27.747 Range in us Cumulative Count 00:39:27.747 3.139 - 3.154: 0.0118% ( 2) 00:39:27.747 3.154 - 3.170: 0.0177% ( 1) 00:39:27.747 3.170 - 3.185: 0.0355% ( 3) 00:39:27.747 3.185 - 3.200: 0.0769% ( 7) 00:39:27.747 3.200 - 3.215: 0.5675% ( 83) 00:39:27.747 3.215 - 3.230: 2.5183% ( 330) 00:39:27.747 3.230 - 3.246: 6.1953% ( 622) 00:39:27.747 3.246 - 3.261: 10.7413% ( 769) 00:39:27.747 3.261 - 3.276: 16.0026% ( 890) 00:39:27.747 3.276 - 3.291: 21.9496% ( 1006) 00:39:27.747 3.291 - 3.307: 27.2050% ( 889) 00:39:27.747 3.307 - 3.322: 32.7441% ( 937) 00:39:27.747 3.322 - 3.337: 38.8863% ( 1039) 00:39:27.747 3.337 - 3.352: 44.5259% ( 954) 00:39:27.747 3.352 - 3.368: 49.0778% ( 770) 00:39:27.747 3.368 - 3.383: 55.4682% ( 1081) 00:39:27.747 3.383 - 3.398: 62.3611% ( 1166) 00:39:27.747 3.398 - 3.413: 67.2440% ( 826) 00:39:27.747 3.413 - 3.429: 72.5999% ( 906) 00:39:27.747 3.429 - 3.444: 77.9558% ( 906) 00:39:27.747 3.444 - 3.459: 81.3727% ( 578) 00:39:27.747 3.459 - 3.474: 83.7136% ( 396) 00:39:27.747 3.474 - 3.490: 85.8063% ( 354) 00:39:27.747 3.490 - 3.505: 86.9177% ( 188) 00:39:27.747 3.505 - 3.520: 87.6921% ( 131) 00:39:27.747 3.520 - 3.535: 88.3424% ( 110) 00:39:27.747 3.535 - 3.550: 89.1286% ( 133) 00:39:27.747 3.550 - 3.566: 89.9326% ( 136) 00:39:27.747 3.566 - 3.581: 90.7070% ( 131) 00:39:27.747 3.581 - 3.596: 91.5878% ( 149) 00:39:27.747 3.596 - 3.611: 92.4746% ( 150) 00:39:27.747 3.611 - 3.627: 93.4795% ( 170) 00:39:27.747 3.627 - 3.642: 94.4550% ( 165) 00:39:27.747 3.642 - 3.657: 95.2294% ( 131) 00:39:27.747 3.657 - 3.672: 96.1043% ( 148) 00:39:27.747 3.672 - 3.688: 96.8255% ( 122) 00:39:27.747 3.688 - 3.703: 97.4166% ( 100) 00:39:27.747 3.703 - 3.718: 97.9073% ( 83) 00:39:27.747 3.718 - 3.733: 98.3802% ( 80) 00:39:27.747 3.733 - 3.749: 98.7349% ( 60) 00:39:27.747 3.749 - 3.764: 98.9773% ( 41) 00:39:27.747 3.764 - 3.779: 99.1606% ( 31) 00:39:27.747 3.779 - 3.794: 99.3556% ( 33) 00:39:27.747 3.794 - 3.810: 99.4443% ( 15) 00:39:27.747 3.810 - 3.825: 99.5034% ( 10) 00:39:27.747 3.825 - 3.840: 99.5389% ( 6) 00:39:27.747 3.840 - 3.855: 99.5566% ( 3) 00:39:27.747 3.855 - 3.870: 99.5744% ( 3) 00:39:27.747 3.870 - 3.886: 99.5862% ( 2) 00:39:27.747 3.886 - 3.901: 99.5921% ( 1) 00:39:27.747 3.901 - 3.931: 99.5980% ( 1) 00:39:27.747 3.962 - 3.992: 99.6098% ( 2) 00:39:27.747 4.023 - 4.053: 99.6157% ( 1) 00:39:27.747 6.004 - 6.034: 99.6217% ( 1) 00:39:27.747 6.309 - 6.339: 99.6276% ( 1) 00:39:27.747 6.674 - 6.705: 99.6335% ( 1) 
00:39:27.747 6.766 - 6.796: 99.6453% ( 2) 00:39:27.747 6.888 - 6.918: 99.6512% ( 1) 00:39:27.747 6.949 - 6.979: 99.6571% ( 1) 00:39:27.747 6.979 - 7.010: 99.6690% ( 2) 00:39:27.747 7.070 - 7.101: 99.6808% ( 2) 00:39:27.747 7.101 - 7.131: 99.6867% ( 1) 00:39:27.747 7.162 - 7.192: 99.6985% ( 2) 00:39:27.747 7.223 - 7.253: 99.7044% ( 1) 00:39:27.747 7.345 - 7.375: 99.7103% ( 1) 00:39:27.747 7.406 - 7.436: 99.7162% ( 1) 00:39:27.747 7.436 - 7.467: 99.7222% ( 1) 00:39:27.747 7.497 - 7.528: 99.7340% ( 2) 00:39:27.747 7.528 - 7.558: 99.7399% ( 1) 00:39:27.747 7.558 - 7.589: 99.7576% ( 3) 00:39:27.747 7.619 - 7.650: 99.7635% ( 1) 00:39:27.747 7.650 - 7.680: 99.7754% ( 2) 00:39:27.747 7.680 - 7.710: 99.7872% ( 2) 00:39:27.747 7.710 - 7.741: 99.7990% ( 2) 00:39:27.748 7.802 - 7.863: 99.8108% ( 2) 00:39:27.748 7.924 - 7.985: 99.8286% ( 3) 00:39:27.748 7.985 - 8.046: 99.8345% ( 1) 00:39:27.748 8.046 - 8.107: 99.8404% ( 1) 00:39:27.748 8.168 - 8.229: 99.8522% ( 2) 00:39:27.748 8.229 - 8.290: 99.8581% ( 1) 00:39:27.748 8.290 - 8.350: 99.8640% ( 1) 00:39:27.748 8.350 - 8.411: 99.8699% ( 1) 00:39:27.748 8.533 - 8.594: 99.8877% ( 3) 00:39:27.748 8.594 - 8.655: 99.8936% ( 1) 00:39:27.748 8.838 - 8.899: 99.9054% ( 2) 00:39:27.748 8.899 - 8.960: 99.9113% ( 1) 00:39:27.748 9.021 - 9.082: 99.9172% ( 1) 00:39:27.748 9.082 - 9.143: 99.9231% ( 1) 00:39:27.748 9.752 - 9.813: 99.9291% ( 1) 00:39:27.748 9.874 - 9.935: 99.9350% ( 1) 00:39:27.748 10.301 - 10.362: 99.9409% ( 1) 00:39:27.748 15.360 - 15.421: 99.9468% ( 1) 00:39:27.748 1022.050 - 1029.851: 99.9527% ( 1) 00:39:27.748 3198.781 - 3214.385: 99.9586% ( 1) 00:39:27.748 3994.575 - 4025.783: 100.0000% ( 7) 00:39:27.748 00:39:27.748 Complete histogram 00:39:27.748 ================== 00:39:27.748 Range in us Cumulative Count 00:39:27.748 1.714 - 1.722: 0.0414% ( 7) 00:39:27.748 1.722 - 1.730: 0.1123% ( 12) 00:39:27.748 1.730 - 1.737: 0.1655% ( 9) 00:39:27.748 1.745 - 1.752: 0.1773% ( 2) 00:39:27.748 1.752 - 1.760: 0.2187% ( 7) 00:39:27.748 1.760 - 1.768: 1.3774% ( 196) 00:39:27.748 1.768 - 1.775: 10.8004% ( 1594) 00:39:27.748 1.775 - 1.783: 28.2100% ( 2945) 00:39:27.748 1.783 - 1.790: 38.1828% ( 1687) 00:39:27.748 1.790 - 1.798: 41.6943% ( 594) 00:39:27.748 1.798 - 1.806: 44.2126% ( 426) 00:39:27.748 1.806 - 1.813: 46.4826% ( 384) 00:39:27.748 1.813 - 1.821: 48.2561% ( 300) 00:39:27.748 1.821 - 1.829: 53.9135% ( 957) 00:39:27.748 1.829 - 1.836: 70.0993% ( 2738) 00:39:27.748 1.836 - 1.844: 86.1256% ( 2711) 00:39:27.748 1.844 - 1.851: 92.5692% ( 1090) 00:39:27.748 1.851 - 1.859: 94.1948% ( 275) 00:39:27.748 1.859 - 1.867: 95.9269% ( 293) 00:39:27.748 1.867 - 1.874: 97.3989% ( 249) 00:39:27.748 1.874 - 1.882: 98.1083% ( 120) 00:39:27.748 1.882 - 1.890: 98.3625% ( 43) 00:39:27.748 1.890 - 1.897: 98.5103% ( 25) 00:39:27.748 1.897 - 1.905: 98.6640% ( 26) 00:39:27.748 1.905 - 1.912: 98.8768% ( 36) 00:39:27.748 1.912 - 1.920: 99.0719% ( 33) 00:39:27.748 1.920 - 1.928: 99.1369% ( 11) 00:39:27.748 1.928 - 1.935: 99.1606% ( 4) 00:39:27.748 1.935 - 1.943: 99.1960% ( 6) 00:39:27.748 1.943 - 1.950: 99.2079% ( 2) 00:39:27.748 1.950 - 1.966: 99.2374% ( 5) 00:39:27.748 1.966 - 1.981: 99.2670% ( 5) 00:39:27.748 1.996 - 2.011: 99.2906% ( 4) 00:39:27.748 2.011 - 2.027: 99.3024% ( 2) 00:39:27.748 2.042 - 2.057: 99.3143% ( 2) 00:39:27.748 2.057 - 2.072: 99.3202% ( 1) 00:39:27.748 3.429 - 3.444: 99.3261% ( 1) 00:39:27.748 3.825 - 3.840: 99.3320% ( 1) 00:39:27.748 5.120 - 5.150: 99.3379% ( 1) 00:39:27.748 5.242 - 5.272: 99.3438% ( 1) 00:39:27.748 5.425 - 5.455: 99.3497% ( 1) 00:39:27.748 
5.608 - 5.638: 99.3556% ( 1) 00:39:27.748 5.638 - 5.669: 99.3675% ( 2) 00:39:27.748 5.821 - 5.851: 99.3852% ( 3) 00:39:27.748 5.851 - 5.882: 99.3911% ( 1) 00:39:27.748 5.882 - 5.912: 99.3970% ( 1) 00:39:27.748 5.973 - 6.004: 99.4029% ( 1) 00:39:27.748 6.065 - 6.095: 99.4088% ( 1) 00:39:27.748 6.248 - 6.278: 99.4148% ( 1) 00:39:27.748 6.339 - 6.370: 99.4207% ( 1) 00:39:27.748 6.370 - 6.400: 99.4266% ( 1) 00:39:27.748 6.430 - 6.461: 99.4325% ( 1) 00:39:27.748 6.491 - 6.522: 99.4443% ( 2) 00:39:27.748 6.522 - 6.552: 99.4502% ( 1) 00:39:27.748 6.644 - 6.674: 99.4561% ( 1) 00:39:27.748 6.766 - 6.796: 99.4620% ( 1) 00:39:27.748 6.796 - 6.827: 99.4680% ( 1) 00:39:27.748 6.857 - 6.888: 99.4739% ( 1) 00:39:27.748 6.949 - 6.979: 99.4798% ( 1) 00:39:27.748 7.528 - 7.558: 99.4857% ( 1) 00:39:27.748 7.589 - 7.619: 99.4916% ( 1) 00:39:27.748 7.741 - 7.771: 99.4975% ( 1) 00:39:27.748 8.046 - 8.107: 99.5034% ( 1) 00:39:27.748 12.190 - 12.251: 99.5093% ( 1) 00:39:27.748 14.629 - 14.690: 99.5153% ( 1) 00:39:27.748 43.398 - 43.642: 99.5212% ( 1) 00:39:27.748 146.286 - 147.261: 99.5271% ( 1) 00:39:27.748 3183.177 - 3198.781: 99.5330% ( 1) 00:39:27.748 3417.234 - 3432.838: 99.5389% ( 1) 00:39:27.748 3994.575 - 4025.783: 99.9823% ( 75) 00:39:27.748 [2024-06-11 03:36:08.802002] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:27.748 6990.507 - 7021.714: 99.9882% ( 1) 00:39:27.748 7957.943 - 7989.150: 100.0000% ( 2) 00:39:27.748 00:39:27.748 03:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:39:27.748 03:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:39:27.748 03:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:39:27.748 03:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:39:27.748 03:36:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:39:27.748 [ 00:39:27.748 { 00:39:27.748 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:27.748 "subtype": "Discovery", 00:39:27.748 "listen_addresses": [], 00:39:27.748 "allow_any_host": true, 00:39:27.748 "hosts": [] 00:39:27.748 }, 00:39:27.748 { 00:39:27.748 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:39:27.748 "subtype": "NVMe", 00:39:27.748 "listen_addresses": [ 00:39:27.748 { 00:39:27.748 "trtype": "VFIOUSER", 00:39:27.748 "adrfam": "IPv4", 00:39:27.748 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:39:27.748 "trsvcid": "0" 00:39:27.748 } 00:39:27.748 ], 00:39:27.748 "allow_any_host": true, 00:39:27.748 "hosts": [], 00:39:27.748 "serial_number": "SPDK1", 00:39:27.748 "model_number": "SPDK bdev Controller", 00:39:27.748 "max_namespaces": 32, 00:39:27.748 "min_cntlid": 1, 00:39:27.748 "max_cntlid": 65519, 00:39:27.748 "namespaces": [ 00:39:27.748 { 00:39:27.748 "nsid": 1, 00:39:27.748 "bdev_name": "Malloc1", 00:39:27.748 "name": "Malloc1", 00:39:27.748 "nguid": "323972873CD84EE48EB1C539F0F04B1F", 00:39:27.748 "uuid": "32397287-3cd8-4ee4-8eb1-c539f0f04b1f" 00:39:27.748 }, 00:39:27.748 { 00:39:27.748 "nsid": 2, 00:39:27.748 "bdev_name": "Malloc3", 00:39:27.748 "name": "Malloc3", 00:39:27.748 "nguid": "1BA9A4ED5A7B44ED963745219692C8A5", 00:39:27.748 "uuid": "1ba9a4ed-5a7b-44ed-9637-45219692c8a5" 00:39:27.748 } 00:39:27.748 ]
00:39:27.748 }, 00:39:27.748 { 00:39:27.748 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:39:27.748 "subtype": "NVMe", 00:39:27.748 "listen_addresses": [ 00:39:27.748 { 00:39:27.748 "trtype": "VFIOUSER", 00:39:27.748 "adrfam": "IPv4", 00:39:27.748 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:39:27.748 "trsvcid": "0" 00:39:27.748 } 00:39:27.748 ], 00:39:27.748 "allow_any_host": true, 00:39:27.748 "hosts": [], 00:39:27.748 "serial_number": "SPDK2", 00:39:27.748 "model_number": "SPDK bdev Controller", 00:39:27.748 "max_namespaces": 32, 00:39:27.748 "min_cntlid": 1, 00:39:27.748 "max_cntlid": 65519, 00:39:27.748 "namespaces": [ 00:39:27.748 { 00:39:27.748 "nsid": 1, 00:39:27.748 "bdev_name": "Malloc2", 00:39:27.748 "name": "Malloc2", 00:39:27.748 "nguid": "91E9CCF0C1F743C59654F74979F212E4", 00:39:27.748 "uuid": "91e9ccf0-c1f7-43c5-9654-f74979f212e4" 00:39:27.748 } 00:39:27.748 ] 00:39:27.748 } 00:39:27.748 ] 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2124883 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # local i=0 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1275 -- # return 0 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:39:27.748 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:39:27.748 EAL: No free 2048 kB hugepages reported on node 1 00:39:28.007 [2024-06-11 03:36:09.169728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:39:28.007 Malloc4 00:39:28.007 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:39:28.007 [2024-06-11 03:36:09.394422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:39:28.265 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:39:28.265 Asynchronous Event Request test 00:39:28.265 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:39:28.265 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:39:28.265 Registering asynchronous event callbacks... 00:39:28.265 Starting namespace attribute notice tests for all controllers... 00:39:28.265 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:39:28.265 aer_cb - Changed Namespace 00:39:28.265 Cleaning up... 
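The AER exchange that just finished follows a fixed sequence: start the aer test binary and wait for its touch file, hot-add a namespace over RPC, and let the resulting Namespace Attribute Changed notice (log page 4, aen_event_type 0x02) confirm delivery; the nvmf_get_subsystems dump printed next then shows Malloc4 attached as nsid 2 of cnode2. A hedged sketch of the same flow, reusing the hypothetical SPDK_ROOT and TR variables from the earlier sketch; the commands and flags mirror the trace above:

# Hypothetical replay of the aer_vfio_user steps traced above.
RPC="$SPDK_ROOT/scripts/rpc.py"
TOUCH=/tmp/aer_touch_file
rm -f "$TOUCH"   # assumption: start from a clean flag so the wait below is meaningful

# Start the AER listener in the background; it touches $TOUCH once its callbacks are registered.
$SPDK_ROOT/test/nvme/aer/aer -r "$TR" -n 2 -g -t "$TOUCH" &
aerpid=$!
while [ ! -e "$TOUCH" ]; do sleep 0.1; done   # same idea as the waitforfile helper in the trace
rm -f "$TOUCH"                                # the script clears the flag before triggering the event

# Hot-add a second namespace; the target raises the Changed Namespace List AEN.
$RPC bdev_malloc_create 64 512 --name Malloc4
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
wait "$aerpid"   # mirrors 'wait 2124883' above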
00:39:28.265 [ 00:39:28.265 { 00:39:28.265 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:28.265 "subtype": "Discovery", 00:39:28.265 "listen_addresses": [], 00:39:28.265 "allow_any_host": true, 00:39:28.265 "hosts": [] 00:39:28.265 }, 00:39:28.265 { 00:39:28.265 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:39:28.265 "subtype": "NVMe", 00:39:28.265 "listen_addresses": [ 00:39:28.265 { 00:39:28.265 "trtype": "VFIOUSER", 00:39:28.265 "adrfam": "IPv4", 00:39:28.265 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:39:28.265 "trsvcid": "0" 00:39:28.265 } 00:39:28.265 ], 00:39:28.265 "allow_any_host": true, 00:39:28.265 "hosts": [], 00:39:28.265 "serial_number": "SPDK1", 00:39:28.265 "model_number": "SPDK bdev Controller", 00:39:28.265 "max_namespaces": 32, 00:39:28.265 "min_cntlid": 1, 00:39:28.265 "max_cntlid": 65519, 00:39:28.265 "namespaces": [ 00:39:28.265 { 00:39:28.265 "nsid": 1, 00:39:28.265 "bdev_name": "Malloc1", 00:39:28.265 "name": "Malloc1", 00:39:28.265 "nguid": "323972873CD84EE48EB1C539F0F04B1F", 00:39:28.265 "uuid": "32397287-3cd8-4ee4-8eb1-c539f0f04b1f" 00:39:28.265 }, 00:39:28.265 { 00:39:28.265 "nsid": 2, 00:39:28.265 "bdev_name": "Malloc3", 00:39:28.265 "name": "Malloc3", 00:39:28.265 "nguid": "1BA9A4ED5A7B44ED963745219692C8A5", 00:39:28.265 "uuid": "1ba9a4ed-5a7b-44ed-9637-45219692c8a5" 00:39:28.265 } 00:39:28.265 ] 00:39:28.265 }, 00:39:28.265 { 00:39:28.265 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:39:28.265 "subtype": "NVMe", 00:39:28.265 "listen_addresses": [ 00:39:28.265 { 00:39:28.265 "trtype": "VFIOUSER", 00:39:28.265 "adrfam": "IPv4", 00:39:28.265 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:39:28.265 "trsvcid": "0" 00:39:28.265 } 00:39:28.265 ], 00:39:28.265 "allow_any_host": true, 00:39:28.265 "hosts": [], 00:39:28.265 "serial_number": "SPDK2", 00:39:28.265 "model_number": "SPDK bdev Controller", 00:39:28.265 "max_namespaces": 32, 00:39:28.265 "min_cntlid": 1, 00:39:28.265 "max_cntlid": 65519, 00:39:28.265 "namespaces": [ 00:39:28.265 { 00:39:28.265 "nsid": 1, 00:39:28.265 "bdev_name": "Malloc2", 00:39:28.265 "name": "Malloc2", 00:39:28.265 "nguid": "91E9CCF0C1F743C59654F74979F212E4", 00:39:28.265 "uuid": "91e9ccf0-c1f7-43c5-9654-f74979f212e4" 00:39:28.265 }, 00:39:28.265 { 00:39:28.265 "nsid": 2, 00:39:28.266 "bdev_name": "Malloc4", 00:39:28.266 "name": "Malloc4", 00:39:28.266 "nguid": "25D0F2CDC995406E87ACB7EC609B4F9F", 00:39:28.266 "uuid": "25d0f2cd-c995-406e-87ac-b7ec609b4f9f" 00:39:28.266 } 00:39:28.266 ] 00:39:28.266 } 00:39:28.266 ] 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2124883 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2116877 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2116877 ']' 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2116877 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2116877 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo 
']' 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2116877' 00:39:28.266 killing process with pid 2116877 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2116877 00:39:28.266 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2116877 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2125049 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2125049' 00:39:28.524 Process pid: 2125049 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2125049 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # '[' -z 2125049 ']' 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:28.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:28.524 03:36:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:39:28.782 [2024-06-11 03:36:09.949598] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:28.782 [2024-06-11 03:36:09.950474] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:39:28.782 [2024-06-11 03:36:09.950513] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:28.782 EAL: No free 2048 kB hugepages reported on node 1 00:39:28.782 [2024-06-11 03:36:10.011173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:28.782 [2024-06-11 03:36:10.058305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:28.782 [2024-06-11 03:36:10.058344] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:28.782 [2024-06-11 03:36:10.058351] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:28.782 [2024-06-11 03:36:10.058357] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:28.782 [2024-06-11 03:36:10.058362] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:28.782 [2024-06-11 03:36:10.058405] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.783 [2024-06-11 03:36:10.058423] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:39:28.783 [2024-06-11 03:36:10.058510] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:39:28.783 [2024-06-11 03:36:10.058510] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.783 [2024-06-11 03:36:10.128724] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:28.783 [2024-06-11 03:36:10.128843] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:28.783 [2024-06-11 03:36:10.129115] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:28.783 [2024-06-11 03:36:10.129437] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:28.783 [2024-06-11 03:36:10.129703] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:28.783 03:36:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:28.783 03:36:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@863 -- # return 0 00:39:28.783 03:36:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:39:30.155 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:39:30.155 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:39:30.155 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:39:30.155 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:39:30.155 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:39:30.155 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:39:30.155 Malloc1 00:39:30.155 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:39:30.413 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:39:30.671 03:36:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:39:30.671 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:39:30.671 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:39:30.671 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:39:30.930 Malloc2 00:39:30.930 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:39:31.188 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:39:31.188 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2125049 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@949 -- # '[' -z 2125049 ']' 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # kill -0 2125049 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # uname 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2125049 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2125049' 00:39:31.447 killing process with pid 2125049 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@968 -- # kill 2125049 00:39:31.447 03:36:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@973 -- # wait 2125049 00:39:31.706 03:36:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:39:31.706 03:36:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:39:31.706 00:39:31.706 real 0m49.894s 00:39:31.706 user 3m17.684s 00:39:31.706 sys 0m3.352s 00:39:31.706 03:36:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:31.706 03:36:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:39:31.706 ************************************ 00:39:31.706 END TEST nvmf_vfio_user 00:39:31.706 ************************************ 00:39:31.706 03:36:13 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:39:31.706 03:36:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:31.706 03:36:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:31.706 03:36:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:31.706 ************************************ 00:39:31.706 START TEST nvmf_vfio_user_nvme_compliance 00:39:31.706 
************************************ 00:39:31.706 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:39:31.964 * Looking for test storage... 00:39:31.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2125761 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2125761' 00:39:31.965 Process pid: 2125761 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2125761 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # '[' -z 2125761 ']' 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:31.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:31.965 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:39:31.965 [2024-06-11 03:36:13.218281] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:39:31.965 [2024-06-11 03:36:13.218329] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:31.965 EAL: No free 2048 kB hugepages reported on node 1 00:39:31.965 [2024-06-11 03:36:13.279222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:31.965 [2024-06-11 03:36:13.319768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:31.965 [2024-06-11 03:36:13.319805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:31.965 [2024-06-11 03:36:13.319812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:31.965 [2024-06-11 03:36:13.319818] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:31.965 [2024-06-11 03:36:13.319823] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
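The compliance run above starts a fresh nvmf_tgt on a three-core mask (0x7) and, in the rpc_cmd calls traced just below, builds the single vfio-user endpoint the suite exercises. Condensed here, in the order the script issues them (rpc_cmd is the harness wrapper around scripts/rpc.py; -m 32 caps the subsystem at 32 namespaces):

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0            # 64 MB disk, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0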
00:39:31.965 [2024-06-11 03:36:13.319872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:39:31.965 [2024-06-11 03:36:13.319890] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:39:31.965 [2024-06-11 03:36:13.319895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:39:32.223 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:32.223 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@863 -- # return 0 00:39:32.223 03:36:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:39:33.158 malloc0 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:39:33.158 03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:33.158 
03:36:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:39:33.158 EAL: No free 2048 kB hugepages reported on node 1 00:39:33.416 00:39:33.416 00:39:33.416 CUnit - A unit testing framework for C - Version 2.1-3 00:39:33.416 http://cunit.sourceforge.net/ 00:39:33.416 00:39:33.416 00:39:33.416 Suite: nvme_compliance 00:39:33.416 Test: admin_identify_ctrlr_verify_dptr ...[2024-06-11 03:36:14.635497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:33.416 [2024-06-11 03:36:14.636796] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:39:33.416 [2024-06-11 03:36:14.636810] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:39:33.416 [2024-06-11 03:36:14.636816] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:39:33.416 [2024-06-11 03:36:14.638517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:33.416 passed 00:39:33.416 Test: admin_identify_ctrlr_verify_fused ...[2024-06-11 03:36:14.717054] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:33.416 [2024-06-11 03:36:14.720085] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:33.416 passed 00:39:33.416 Test: admin_identify_ns ...[2024-06-11 03:36:14.799750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:33.675 [2024-06-11 03:36:14.859019] ctrlr.c:2710:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:39:33.675 [2024-06-11 03:36:14.867021] ctrlr.c:2710:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:39:33.675 [2024-06-11 03:36:14.888116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:33.675 passed 00:39:33.675 Test: admin_get_features_mandatory_features ...[2024-06-11 03:36:14.965778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:33.675 [2024-06-11 03:36:14.968799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:33.675 passed 00:39:33.675 Test: admin_get_features_optional_features ...[2024-06-11 03:36:15.047304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:33.675 [2024-06-11 03:36:15.050330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:33.675 passed 00:39:33.963 Test: admin_set_features_number_of_queues ...[2024-06-11 03:36:15.127289] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:33.963 [2024-06-11 03:36:15.233106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:33.963 passed 00:39:33.963 Test: admin_get_log_page_mandatory_logs ...[2024-06-11 03:36:15.309931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:33.963 [2024-06-11 03:36:15.312950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:34.227 passed 00:39:34.227 Test: admin_get_log_page_with_lpo ...[2024-06-11 03:36:15.388347] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:34.227 [2024-06-11 03:36:15.460022] 
ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:39:34.227 [2024-06-11 03:36:15.473061] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:34.227 passed 00:39:34.227 Test: fabric_property_get ...[2024-06-11 03:36:15.547933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:34.227 [2024-06-11 03:36:15.549162] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:39:34.227 [2024-06-11 03:36:15.550957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:34.227 passed 00:39:34.227 Test: admin_delete_io_sq_use_admin_qid ...[2024-06-11 03:36:15.629478] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:34.227 [2024-06-11 03:36:15.630838] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:39:34.485 [2024-06-11 03:36:15.632492] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:34.485 passed 00:39:34.485 Test: admin_delete_io_sq_delete_sq_twice ...[2024-06-11 03:36:15.710345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:34.485 [2024-06-11 03:36:15.795031] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:39:34.485 [2024-06-11 03:36:15.811019] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:39:34.485 [2024-06-11 03:36:15.816104] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:34.485 passed 00:39:34.485 Test: admin_delete_io_cq_use_admin_qid ...[2024-06-11 03:36:15.889968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:34.743 [2024-06-11 03:36:15.891185] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:39:34.743 [2024-06-11 03:36:15.892988] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:34.743 passed 00:39:34.743 Test: admin_delete_io_cq_delete_cq_first ...[2024-06-11 03:36:15.969750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:34.743 [2024-06-11 03:36:16.045017] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:39:34.743 [2024-06-11 03:36:16.068017] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:39:34.743 [2024-06-11 03:36:16.073094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:34.744 passed 00:39:35.001 Test: admin_create_io_cq_verify_iv_pc ...[2024-06-11 03:36:16.149667] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:35.001 [2024-06-11 03:36:16.150876] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:39:35.001 [2024-06-11 03:36:16.150900] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:39:35.001 [2024-06-11 03:36:16.152686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:35.001 passed 00:39:35.001 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-06-11 03:36:16.229429] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:35.001 [2024-06-11 03:36:16.321016] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:39:35.001 [2024-06-11 03:36:16.329020] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:39:35.001 [2024-06-11 03:36:16.337024] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:39:35.001 [2024-06-11 03:36:16.345014] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:39:35.001 [2024-06-11 03:36:16.374108] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:35.001 passed 00:39:35.259 Test: admin_create_io_sq_verify_pc ...[2024-06-11 03:36:16.449749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:35.259 [2024-06-11 03:36:16.466025] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:39:35.259 [2024-06-11 03:36:16.483932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:35.259 passed 00:39:35.259 Test: admin_create_io_qp_max_qps ...[2024-06-11 03:36:16.561450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:36.636 [2024-06-11 03:36:17.665017] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:39:36.636 [2024-06-11 03:36:18.039138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:36.894 passed 00:39:36.894 Test: admin_create_io_sq_shared_cq ...[2024-06-11 03:36:18.118292] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:39:36.894 [2024-06-11 03:36:18.251019] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:39:36.894 [2024-06-11 03:36:18.288078] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:39:37.153 passed 00:39:37.153 00:39:37.153 Run Summary: Type Total Ran Passed Failed Inactive 00:39:37.153 suites 1 1 n/a 0 0 00:39:37.153 tests 18 18 18 0 0 00:39:37.153 asserts 360 360 360 0 n/a 00:39:37.153 00:39:37.153 Elapsed time = 1.497 seconds 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2125761 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@949 -- # '[' -z 2125761 ']' 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # kill -0 2125761 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # uname 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2125761 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2125761' 00:39:37.153 killing process with pid 2125761 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # kill 2125761 00:39:37.153 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # wait 2125761 00:39:37.412 03:36:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:39:37.412 00:39:37.412 real 0m5.466s 00:39:37.412 user 0m15.557s 00:39:37.412 sys 0m0.421s 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # xtrace_disable 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:39:37.412 ************************************ 00:39:37.412 END TEST nvmf_vfio_user_nvme_compliance 00:39:37.412 ************************************ 00:39:37.412 03:36:18 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:39:37.412 03:36:18 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:39:37.412 03:36:18 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:39:37.412 03:36:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:37.412 ************************************ 00:39:37.412 START TEST nvmf_vfio_user_fuzz 00:39:37.412 ************************************ 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:39:37.412 * Looking for test storage... 00:39:37.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:39:37.412 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2126734 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2126734' 00:39:37.413 Process pid: 2126734 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2126734 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@830 -- # '[' -z 2126734 ']' 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:37.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
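The fuzz target starting here is deliberately minimal: one reactor core (mask 0x1) serving a single vfio-user subsystem, which nvme_fuzz then drives from a second core (mask 0x2) for 30 seconds with a fixed seed so any failure is reproducible. The bring-up and fuzz invocation traced below, condensed with paths shortened:

    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # -t 30: run 30 s; -S 123456: fixed random seed; -N: no shutdown notice; -a: allow admin fuzzing
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a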
00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:39:37.413 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:39:37.672 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:39:37.672 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@863 -- # return 0 00:39:37.672 03:36:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:39:38.608 malloc0 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.608 03:36:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:39:38.608 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.608 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:39:38.608 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.608 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:39:38.866 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.866 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:39:38.866 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:38.866 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:39:38.866 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:38.866 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:39:38.866 03:36:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:40:10.929 Fuzzing completed. 
Shutting down the fuzz application 00:40:10.929 00:40:10.929 Dumping successful admin opcodes: 00:40:10.929 8, 9, 10, 24, 00:40:10.929 Dumping successful io opcodes: 00:40:10.929 0, 00:40:10.929 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1167328, total successful commands: 4595, random_seed: 790698560 00:40:10.930 NS: 0x200003a1ef00 admin qp, Total commands completed: 290464, total successful commands: 2351, random_seed: 2249780992 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2126734 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@949 -- # '[' -z 2126734 ']' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # kill -0 2126734 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # uname 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2126734 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2126734' 00:40:10.930 killing process with pid 2126734 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # kill 2126734 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # wait 2126734 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:40:10.930 00:40:10.930 real 0m32.085s 00:40:10.930 user 0m34.682s 00:40:10.930 sys 0m26.260s 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:10.930 03:36:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:40:10.930 ************************************ 00:40:10.930 END TEST nvmf_vfio_user_fuzz 00:40:10.930 ************************************ 00:40:10.930 03:36:50 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:40:10.930 03:36:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:10.930 03:36:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:10.930 03:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:10.930 ************************************ 00:40:10.930 START TEST nvmf_host_management 00:40:10.930 
************************************ 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:40:10.930 * Looking for test storage... 00:40:10.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
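nvmftestinit, entered above, picks between virtual and physical networking from NET_TYPE; on this phy rig it skips veth setup and scans for supported NICs instead. Reduced to just the nvmf/common.sh lines traced here (a simplification for illustration, not the full function):

    prepare_net_devs() {
        local -g is_hw=no
        remove_spdk_ns                       # quietly tear down any stale test netns
        [[ $NET_TYPE != virt ]] && gather_supported_nvmf_pci_devs
    }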
00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:40:10.930 03:36:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:16.205 03:36:56 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:16.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:40:16.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:16.205 Found net devices under 0000:86:00.0: cvl_0_0 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:16.205 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:16.206 Found net devices under 0000:86:00.1: cvl_0_1 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:16.206 03:36:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:16.206 03:36:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:16.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:16.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:40:16.206 00:40:16.206 --- 10.0.0.2 ping statistics --- 00:40:16.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.206 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:16.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:16.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:40:16.206 00:40:16.206 --- 10.0.0.1 ping statistics --- 00:40:16.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.206 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2135316 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2135316 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2135316 ']' 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
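The stretch from remove_spdk_ns through the two pings above is nvmftestinit building a self-contained NVMe/TCP topology on one host: of the two ice-driver E810 ports found (cvl_0_0 and cvl_0_1), the target port cvl_0_0 is moved into a private network namespace, its sibling cvl_0_1 stays in the default namespace as the initiator, a firewall rule opens the NVMe/TCP port, and connectivity is verified in both directions before the target starts. Condensed from the commands traced above:

ip -4 addr flush cvl_0_0                                            # start from clean addressing
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port out of the default namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator keeps 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target answers on 10.0.0.2
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic (port 4420)
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator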
00:40:16.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.206 [2024-06-11 03:36:57.158404] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:40:16.206 [2024-06-11 03:36:57.158444] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:16.206 EAL: No free 2048 kB hugepages reported on node 1 00:40:16.206 [2024-06-11 03:36:57.221703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:16.206 [2024-06-11 03:36:57.264048] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:16.206 [2024-06-11 03:36:57.264084] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:16.206 [2024-06-11 03:36:57.264091] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:16.206 [2024-06-11 03:36:57.264096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:16.206 [2024-06-11 03:36:57.264101] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:16.206 [2024-06-11 03:36:57.264202] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:16.206 [2024-06-11 03:36:57.264266] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:40:16.206 [2024-06-11 03:36:57.264374] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.206 [2024-06-11 03:36:57.264376] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.206 [2024-06-11 03:36:57.397990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.206 Malloc0 00:40:16.206 [2024-06-11 03:36:57.457109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:16.206 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2135362 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2135362 /var/tmp/bdevperf.sock 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 2135362 ']' 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:16.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
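At this point the target is up: nvmf_tgt runs inside the namespace with -m 0x1E (a core mask selecting cores 1-4, matching the four reactors reported above), the TCP transport is created, and Malloc0 is exported behind the listener on 10.0.0.2:4420. The rpcs.txt that host_management.sh@23 pipes in is not echoed to this log; judging only by the effects visible above (Malloc0, the 10.0.0.2:4420 listener, and a subsystem nqn.2016-06.io.spdk:cnode0 that admits nqn.2016-06.io.spdk:host0), an equivalent hand-driven setup would look roughly like the following sketch (the exact RPC options are an assumption):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                     # traced at host_management.sh@18 (-u: io_unit_size)
# The rest mirrors what rpcs.txt appears to contain (assumption: the file
# itself is never printed in this log):
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                        # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512 B
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # allow-listed here; revoked again at @84 below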
00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:16.207 { 00:40:16.207 "params": { 00:40:16.207 "name": "Nvme$subsystem", 00:40:16.207 "trtype": "$TEST_TRANSPORT", 00:40:16.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:16.207 "adrfam": "ipv4", 00:40:16.207 "trsvcid": "$NVMF_PORT", 00:40:16.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:16.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:16.207 "hdgst": ${hdgst:-false}, 00:40:16.207 "ddgst": ${ddgst:-false} 00:40:16.207 }, 00:40:16.207 "method": "bdev_nvme_attach_controller" 00:40:16.207 } 00:40:16.207 EOF 00:40:16.207 )") 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:40:16.207 03:36:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:16.207 "params": { 00:40:16.207 "name": "Nvme0", 00:40:16.207 "trtype": "tcp", 00:40:16.207 "traddr": "10.0.0.2", 00:40:16.207 "adrfam": "ipv4", 00:40:16.207 "trsvcid": "4420", 00:40:16.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:16.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:16.207 "hdgst": false, 00:40:16.207 "ddgst": false 00:40:16.207 }, 00:40:16.207 "method": "bdev_nvme_attach_controller" 00:40:16.207 }' 00:40:16.207 [2024-06-11 03:36:57.548195] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:40:16.207 [2024-06-11 03:36:57.548241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135362 ] 00:40:16.207 EAL: No free 2048 kB hugepages reported on node 1 00:40:16.466 [2024-06-11 03:36:57.609040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.466 [2024-06-11 03:36:57.649372] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.725 Running I/O for 10 seconds... 
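The block above is gen_nvmf_target_json at work: for each requested subsystem number it instantiates the heredoc template (the first JSON fragment, with $subsystem, $TEST_TRANSPORT and friends still unexpanded in the trace) and jq then merges and re-serializes the fragments into the final config (the second JSON, fully expanded to attach Nvme0 to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420). bdevperf receives that config as --json /dev/fd/63, which is bash process substitution; written out directly, the launch traced at host_management.sh@72 is equivalent to:

# Equivalent form of the traced bdevperf launch (binary path shortened;
# <(...) is what produced the /dev/fd/63 seen in the trace):
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10
# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back-verify
# workload, -t 10: ten-second run -- hence "Running I/O for 10 seconds..."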
00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:40:16.725 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:40:16.984 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:40:16.984 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:16.984 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:16.984 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:16.984 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:16.984 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:16.984 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:40:17.245 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:17.245 [2024-06-11 03:36:58.408333] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408381] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408388] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408394] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408401] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408407] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408413] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408419] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408424] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.245 [2024-06-11 03:36:58.408431] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408436] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408442] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408448] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408454] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408460] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408466] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408472] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408477] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) 
to be set 00:40:17.246 [2024-06-11 03:36:58.408488] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408494] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408500] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408506] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408517] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408522] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408528] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408534] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408539] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408545] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408550] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408556] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408561] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408567] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408573] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408580] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408585] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408591] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408597] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408602] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408608] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408613] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408619] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408624] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408635] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408642] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408648] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408653] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408659] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408664] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408670] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408676] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408681] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408686] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408692] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408698] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408703] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408709] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408714] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408720] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408725] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408731] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408736] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ebf560 is same with the state(5) to be set 00:40:17.246 [2024-06-11 03:36:58.408968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.246 [2024-06-11 03:36:58.409225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.246 [2024-06-11 03:36:58.409233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:40:17.247 [2024-06-11 03:36:58.409741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.247 [2024-06-11 03:36:58.409815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.247 [2024-06-11 03:36:58.409823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 
03:36:58.409888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:40:17.248 [2024-06-11 03:36:58.409946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:17.248 [2024-06-11 03:36:58.409953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b2bf0 is same with the state(5) to be set 00:40:17.248 [2024-06-11 03:36:58.410003] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19b2bf0 was disconnected and freed. reset controller. 
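The "(00/08)" pair printed with every aborted READ above is the NVMe status code type and status code: SCT 0x0 is the generic command status set, and SC 0x08 in that set is Command Aborted due to SQ Deletion. In other words, the queued reads were not failed by the device or the media; they died because their submission queue was deleted out from under them when the qpair was torn down for the controller reset. A hypothetical helper (not part of the test scripts) for decoding such pairs from the log:

  # Split the "SCT/SC" hex pair as printed by spdk_nvme_print_completion.
  decode_nvme_status() {
      local sct=$((16#${1%%/*})) sc=$((16#${1##*/}))
      printf 'sct 0x%x, sc 0x%02x\n' "$sct" "$sc"
  }
  decode_nvme_status 00/08   # -> sct 0x0, sc 0x08 (generic: aborted, SQ deletion)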
00:40:17.248 [2024-06-11 03:36:58.410905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:40:17.248 task offset: 90112 on job bdev=Nvme0n1 fails
00:40:17.248
00:40:17.248 Latency(us)
00:40:17.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:17.248 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:17.248 Job: Nvme0n1 ended in about 0.41 seconds with error
00:40:17.248 Verification LBA range: start 0x0 length 0x400
00:40:17.248 Nvme0n1 : 0.41 1721.09 107.57 156.46 0.00 33212.38 6772.05 29459.99
00:40:17.248 ===================================================================================================================
00:40:17.248 Total : 1721.09 107.57 156.46 0.00 33212.38 6772.05 29459.99
00:40:17.248 [2024-06-11 03:36:58.412465] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:40:17.248 [2024-06-11 03:36:58.412482] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b87c0 (9): Bad file descriptor
00:40:17.248 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:17.248 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:40:17.248 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable
00:40:17.248 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:40:17.248 [2024-06-11 03:36:58.420017] ctrlr.c: 820:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:40:17.248 [2024-06-11 03:36:58.420132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:40:17.248 [2024-06-11 03:36:58.420156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:40:17.248 [2024-06-11 03:36:58.420168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:40:17.248 [2024-06-11 03:36:58.420175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:40:17.248 [2024-06-11 03:36:58.420181] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:40:17.248 [2024-06-11 03:36:58.420188] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19b87c0
00:40:17.248 [2024-06-11 03:36:58.420207] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b87c0 (9): Bad file descriptor
00:40:17.248 [2024-06-11 03:36:58.420218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:40:17.248 [2024-06-11 03:36:58.420224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:40:17.248 [2024-06-11 03:36:58.420232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:40:17.248 [2024-06-11 03:36:58.420244] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
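The CONNECT failure above is the deliberate half of this test step: bdevperf reconnects as nqn.2016-06.io.spdk:host0 while the subsystem's allow list still excludes that host, so the target rejects the fabrics CONNECT with the command-specific status printed above (01/84, reported as sct 1, sc 132), the fabrics "invalid host" code. The rpc_cmd call then adds the host so the retry can succeed. Outside the test harness, the same step is a single RPC against the target's socket (default /var/tmp/spdk.sock assumed):

  # Put host0 on cnode0's allow list; until this runs, its CONNECTs are rejected.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0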
00:40:17.248 03:36:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:40:17.248 03:36:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2135362
00:40:18.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2135362) - No such process
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:40:18.185 {
00:40:18.185 "params": {
00:40:18.185 "name": "Nvme$subsystem",
00:40:18.185 "trtype": "$TEST_TRANSPORT",
00:40:18.185 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:18.185 "adrfam": "ipv4",
00:40:18.185 "trsvcid": "$NVMF_PORT",
00:40:18.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:18.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:18.185 "hdgst": ${hdgst:-false},
00:40:18.185 "ddgst": ${ddgst:-false}
00:40:18.185 },
00:40:18.185 "method": "bdev_nvme_attach_controller"
00:40:18.185 }
00:40:18.185 EOF
00:40:18.185 )")
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:40:18.185 03:36:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:40:18.185 "params": {
00:40:18.185 "name": "Nvme0",
00:40:18.185 "trtype": "tcp",
00:40:18.185 "traddr": "10.0.0.2",
00:40:18.185 "adrfam": "ipv4",
00:40:18.185 "trsvcid": "4420",
00:40:18.185 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:18.185 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:18.185 "hdgst": false,
00:40:18.185 "ddgst": false
00:40:18.185 },
00:40:18.185 "method": "bdev_nvme_attach_controller"
00:40:18.185 }'
00:40:18.185 [2024-06-11 03:36:59.476157] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:40:18.185 [2024-06-11 03:36:59.476211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135791 ]
00:40:18.185 EAL: No free 2048 kB hugepages reported on node 1
00:40:18.185 [2024-06-11 03:36:59.535397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:18.185 [2024-06-11 03:36:59.574038] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:40:18.443 Running I/O for 1 seconds...
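The JSON that gen_nvmf_target_json renders above reaches bdevperf on /dev/fd/62 and amounts to a single bdev_nvme_attach_controller call: it attaches the remote controller as Nvme0, whose namespace then appears as bdev Nvme0n1 (hence the job name in the results below). A rough standalone equivalent against a running app, sketched with the flag names the rpc.py bdev_nvme_attach_controller helper takes (worth confirming with -h on the SPDK revision in use):

  # Hypothetical one-shot equivalent of the generated config entry.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0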
00:40:19.379
00:40:19.379 Latency(us)
00:40:19.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:19.379 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:40:19.379 Verification LBA range: start 0x0 length 0x400
00:40:19.379 Nvme0n1 : 1.01 1858.66 116.17 0.00 0.00 33821.58 1412.14 28586.18
00:40:19.379 ===================================================================================================================
00:40:19.379 Total : 1858.66 116.17 0.00 0.00 33821.58 1412.14 28586.18
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:40:19.636 03:37:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:40:19.636 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2135316 ']'
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2135316
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 2135316 ']'
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 2135316
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # uname
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:40:19.636 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2135316
00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2135316'
00:40:19.895 killing process with pid 2135316
00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 2135316
00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@973 -- # wait 2135316
00:40:19.895 [2024-06-11 03:37:01.220563]
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:19.895 03:37:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.455 03:37:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:22.455 03:37:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:22.455 00:40:22.455 real 0m12.545s 00:40:22.455 user 0m19.734s 00:40:22.455 sys 0m5.721s 00:40:22.455 03:37:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:22.455 03:37:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:22.455 ************************************ 00:40:22.455 END TEST nvmf_host_management 00:40:22.455 ************************************ 00:40:22.455 03:37:03 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:40:22.455 03:37:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:22.455 03:37:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:22.455 03:37:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.455 ************************************ 00:40:22.455 START TEST nvmf_lvol 00:40:22.455 ************************************ 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:40:22.456 * Looking for test storage... 
00:40:22.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.456 03:37:03 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:40:22.456 03:37:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:29.019 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:29.020 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:40:29.020 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:29.020 Found net devices under 0000:86:00.0: cvl_0_0 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:29.020 Found net devices under 0000:86:00.1: cvl_0_1 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:29.020 
03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:29.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:29.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:40:29.020 00:40:29.020 --- 10.0.0.2 ping statistics --- 00:40:29.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.020 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:29.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:29.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:40:29.020 00:40:29.020 --- 10.0.0.1 ping statistics --- 00:40:29.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:29.020 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2139868 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2139868 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 2139868 ']' 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:29.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:29.020 [2024-06-11 03:37:09.616760] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:40:29.020 [2024-06-11 03:37:09.616803] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:29.020 EAL: No free 2048 kB hugepages reported on node 1 00:40:29.020 [2024-06-11 03:37:09.679080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:29.020 [2024-06-11 03:37:09.719716] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:29.020 [2024-06-11 03:37:09.719756] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:29.020 [2024-06-11 03:37:09.719763] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:29.020 [2024-06-11 03:37:09.719769] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:29.020 [2024-06-11 03:37:09.719774] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:29.020 [2024-06-11 03:37:09.719812] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:29.020 [2024-06-11 03:37:09.719828] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:40:29.020 [2024-06-11 03:37:09.719830] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:29.020 03:37:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:29.020 [2024-06-11 03:37:09.992700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:29.021 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:29.021 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:29.021 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:29.021 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:29.021 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:29.279 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:29.538 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=68727740-d65a-4fba-b420-2a9207f8faf6 00:40:29.538 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68727740-d65a-4fba-b420-2a9207f8faf6 lvol 20 00:40:29.796 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=131a0f96-b1a9-4b7e-b164-da305f8406a0 00:40:29.796 03:37:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:29.796 03:37:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 131a0f96-b1a9-4b7e-b164-da305f8406a0 00:40:30.054 03:37:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:40:30.312 [2024-06-11 03:37:11.461255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:30.312 03:37:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:30.312 03:37:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2140140 00:40:30.312 03:37:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:30.312 03:37:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:30.312 EAL: No free 2048 kB hugepages reported on node 1 00:40:31.686 03:37:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 131a0f96-b1a9-4b7e-b164-da305f8406a0 MY_SNAPSHOT 00:40:31.686 03:37:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1d2e4fe6-1a32-4994-ad9c-6810867ac7e2 00:40:31.686 03:37:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 131a0f96-b1a9-4b7e-b164-da305f8406a0 30 00:40:31.944 03:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1d2e4fe6-1a32-4994-ad9c-6810867ac7e2 MY_CLONE 00:40:32.203 03:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0390ac21-db70-443b-aab9-bcd2f480b7fe 00:40:32.203 03:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0390ac21-db70-443b-aab9-bcd2f480b7fe 00:40:32.773 03:37:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2140140 00:40:40.889 Initializing NVMe Controllers 00:40:40.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:40.889 Controller IO queue size 128, less than required. 00:40:40.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:40.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:40.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:40.889 Initialization complete. Launching workers. 
00:40:40.889 ========================================================
00:40:40.889 Latency(us)
00:40:40.889 Device Information : IOPS MiB/s Average min max
00:40:40.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12096.00 47.25 10586.15 1043.96 54558.39
00:40:40.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11944.10 46.66 10718.00 3571.36 47338.08
00:40:40.889 ========================================================
00:40:40.889 Total : 24040.10 93.91 10651.66 1043.96 54558.39
00:40:40.889
00:40:40.889 03:37:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:40:41.148 03:37:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 131a0f96-b1a9-4b7e-b164-da305f8406a0
00:40:41.148 03:37:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68727740-d65a-4fba-b420-2a9207f8faf6
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:40:41.406 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2139868 ']'
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2139868
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@949 -- # '[' -z 2139868 ']'
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 2139868
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # uname
00:40:41.406 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:40:41.407 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2139868
00:40:41.407 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:40:41.407 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:40:41.407 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2139868'
00:40:41.407 killing process with pid 2139868
00:40:41.407 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 2139868
00:40:41.407 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 2139868
00:40:41.665 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:40:41.665
03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:41.665 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:41.665 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:41.665 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:41.665 03:37:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.665 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:41.665 03:37:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:44.200 00:40:44.200 real 0m21.670s 00:40:44.200 user 1m2.447s 00:40:44.200 sys 0m7.108s 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:44.200 ************************************ 00:40:44.200 END TEST nvmf_lvol 00:40:44.200 ************************************ 00:40:44.200 03:37:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:40:44.200 03:37:25 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:40:44.200 03:37:25 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:44.200 03:37:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:44.200 ************************************ 00:40:44.200 START TEST nvmf_lvs_grow 00:40:44.200 ************************************ 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:40:44.200 * Looking for test storage... 
00:40:44.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:44.200 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:40:44.201 03:37:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:50.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:40:50.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:50.763 Found net devices under 0000:86:00.0: cvl_0_0 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.763 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:50.764 Found net devices under 0000:86:00.1: cvl_0_1 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:40:50.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:50.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:40:50.764 00:40:50.764 --- 10.0.0.2 ping statistics --- 00:40:50.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.764 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:50.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:50.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:40:50.764 00:40:50.764 --- 10.0.0.1 ping statistics --- 00:40:50.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.764 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2145802 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2145802 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 2145802 ']' 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:50.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:50.764 [2024-06-11 03:37:31.569079] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:40:50.764 [2024-06-11 03:37:31.569121] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:50.764 EAL: No free 2048 kB hugepages reported on node 1 00:40:50.764 [2024-06-11 03:37:31.630953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.764 [2024-06-11 03:37:31.670351] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:50.764 [2024-06-11 03:37:31.670393] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
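What nvmf_tcp_init has just assembled is a two-port loopback test bed: both E810 ports (0000:86:00.0/1, driver ice) surface as cvl_0_0 and cvl_0_1, the target port is moved into its own network namespace so the kernel routes traffic over the wire instead of short-circuiting it locally, and NVMe/TCP on port 4420 is allowed through. A condensed, hand-runnable sketch of the sequence logged above (root required; binary and script paths shortened from the full jenkins workspace paths):

  # Target port cvl_0_0 lives in namespace cvl_0_0_ns_spdk; initiator uses cvl_0_1.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  modprobe nvme-tcp
  # The target app itself is then launched inside the namespace:
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1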
00:40:50.764 [2024-06-11 03:37:31.670400] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:50.764 [2024-06-11 03:37:31.670405] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:50.764 [2024-06-11 03:37:31.670410] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:50.764 [2024-06-11 03:37:31.670444] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:50.764 [2024-06-11 03:37:31.943451] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:50.764 ************************************ 00:40:50.764 START TEST lvs_grow_clean 00:40:50.764 ************************************ 00:40:50.764 03:37:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:50.764 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:51.022 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:40:51.022 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:51.022 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a75039c9-fba8-462c-9c1f-e989373f4313 00:40:51.022 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:40:51.022 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:51.279 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:51.279 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:51.279 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a75039c9-fba8-462c-9c1f-e989373f4313 lvol 150 00:40:51.536 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9 00:40:51.536 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:51.536 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:51.537 [2024-06-11 03:37:32.868707] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:51.537 [2024-06-11 03:37:32.868756] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:51.537 true 00:40:51.537 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:40:51.537 03:37:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:51.793 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:51.793 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:52.049 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9 00:40:52.049 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:52.305 [2024-06-11 03:37:33.534702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:52.305 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2146279 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2146279 /var/tmp/bdevperf.sock 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 2146279 ']' 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:52.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:52.563 [2024-06-11 03:37:33.748321] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
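The provisioning for lvs_grow_clean, condensed from the RPCs logged above ($rpc, aio_file, $lvs and $lvol are shorthand introduced here; the real script uses the full rpc.py path, the aio_bdev file under test/nvmf/target, and lvstore UUID a75039c9-fba8-462c-9c1f-e989373f4313). A 200 MiB file at a 4 MiB cluster size yields 49 usable data clusters once lvstore metadata is carved out, and the backing file is pre-grown to 400 MiB so the grow step later in the run has room to claim:

  rpc=scripts/rpc.py                                     # shorthand for the logged path
  $rpc nvmf_create_transport -t tcp -o -u 8192           # done once, before both sub-tests
  truncate -s 200M aio_file
  $rpc bdev_aio_create aio_file aio_bdev 4096            # 4 KiB logical blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # -> 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)       # 150 MiB logical volume
  truncate -s 400M aio_file                              # room to grow later
  $rpc bdev_aio_rescan aio_bdev                          # 51200 -> 102400 blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf, started just above, then attaches to that subsystem from the default namespace (bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420) and drives ten seconds of 4 KiB random writes at queue depth 128.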
00:40:52.563 [2024-06-11 03:37:33.748364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146279 ] 00:40:52.563 EAL: No free 2048 kB hugepages reported on node 1 00:40:52.563 [2024-06-11 03:37:33.806028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:52.563 [2024-06-11 03:37:33.845383] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:40:52.563 03:37:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:52.857 Nvme0n1 00:40:52.857 03:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:53.115 [ 00:40:53.115 { 00:40:53.115 "name": "Nvme0n1", 00:40:53.115 "aliases": [ 00:40:53.115 "bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9" 00:40:53.115 ], 00:40:53.115 "product_name": "NVMe disk", 00:40:53.115 "block_size": 4096, 00:40:53.115 "num_blocks": 38912, 00:40:53.115 "uuid": "bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9", 00:40:53.115 "assigned_rate_limits": { 00:40:53.115 "rw_ios_per_sec": 0, 00:40:53.115 "rw_mbytes_per_sec": 0, 00:40:53.115 "r_mbytes_per_sec": 0, 00:40:53.115 "w_mbytes_per_sec": 0 00:40:53.115 }, 00:40:53.115 "claimed": false, 00:40:53.115 "zoned": false, 00:40:53.115 "supported_io_types": { 00:40:53.115 "read": true, 00:40:53.115 "write": true, 00:40:53.115 "unmap": true, 00:40:53.115 "write_zeroes": true, 00:40:53.115 "flush": true, 00:40:53.115 "reset": true, 00:40:53.115 "compare": true, 00:40:53.115 "compare_and_write": true, 00:40:53.115 "abort": true, 00:40:53.115 "nvme_admin": true, 00:40:53.115 "nvme_io": true 00:40:53.115 }, 00:40:53.115 "memory_domains": [ 00:40:53.115 { 00:40:53.115 "dma_device_id": "system", 00:40:53.115 "dma_device_type": 1 00:40:53.115 } 00:40:53.115 ], 00:40:53.115 "driver_specific": { 00:40:53.115 "nvme": [ 00:40:53.115 { 00:40:53.115 "trid": { 00:40:53.115 "trtype": "TCP", 00:40:53.115 "adrfam": "IPv4", 00:40:53.115 "traddr": "10.0.0.2", 00:40:53.115 "trsvcid": "4420", 00:40:53.115 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:53.115 }, 00:40:53.115 "ctrlr_data": { 00:40:53.115 "cntlid": 1, 00:40:53.115 "vendor_id": "0x8086", 00:40:53.115 "model_number": "SPDK bdev Controller", 00:40:53.115 "serial_number": "SPDK0", 00:40:53.115 "firmware_revision": "24.09", 00:40:53.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:53.116 "oacs": { 00:40:53.116 "security": 0, 00:40:53.116 "format": 0, 00:40:53.116 "firmware": 0, 00:40:53.116 "ns_manage": 0 00:40:53.116 }, 00:40:53.116 "multi_ctrlr": true, 00:40:53.116 "ana_reporting": false 00:40:53.116 }, 00:40:53.116 "vs": { 00:40:53.116 "nvme_version": "1.3" 00:40:53.116 }, 00:40:53.116 "ns_data": { 00:40:53.116 "id": 1, 00:40:53.116 "can_share": true 00:40:53.116 } 00:40:53.116 } 00:40:53.116 ], 00:40:53.116 "mp_policy": "active_passive" 00:40:53.116 } 00:40:53.116 } 00:40:53.116 ] 00:40:53.116 03:37:34 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2146352 00:40:53.116 03:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:53.116 03:37:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:53.116 Running I/O for 10 seconds... 00:40:54.487 Latency(us) 00:40:54.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:54.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.488 Nvme0n1 : 1.00 22678.00 88.59 0.00 0.00 0.00 0.00 0.00 00:40:54.488 =================================================================================================================== 00:40:54.488 Total : 22678.00 88.59 0.00 0.00 0.00 0.00 0.00 00:40:54.488 00:40:55.053 03:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a75039c9-fba8-462c-9c1f-e989373f4313 00:40:55.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:55.311 Nvme0n1 : 2.00 22839.00 89.21 0.00 0.00 0.00 0.00 0.00 00:40:55.311 =================================================================================================================== 00:40:55.311 Total : 22839.00 89.21 0.00 0.00 0.00 0.00 0.00 00:40:55.311 00:40:55.311 true 00:40:55.311 03:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:40:55.311 03:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:55.569 03:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:55.569 03:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:55.569 03:37:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2146352 00:40:56.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.134 Nvme0n1 : 3.00 22895.33 89.43 0.00 0.00 0.00 0.00 0.00 00:40:56.134 =================================================================================================================== 00:40:56.134 Total : 22895.33 89.43 0.00 0.00 0.00 0.00 0.00 00:40:56.134 00:40:57.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:57.506 Nvme0n1 : 4.00 22975.50 89.75 0.00 0.00 0.00 0.00 0.00 00:40:57.506 =================================================================================================================== 00:40:57.506 Total : 22975.50 89.75 0.00 0.00 0.00 0.00 0.00 00:40:57.506 00:40:58.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:58.440 Nvme0n1 : 5.00 23026.80 89.95 0.00 0.00 0.00 0.00 0.00 00:40:58.440 =================================================================================================================== 00:40:58.440 Total : 23026.80 89.95 0.00 0.00 0.00 0.00 0.00 00:40:58.440 00:40:59.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:59.373 Nvme0n1 : 6.00 23014.33 89.90 0.00 0.00 0.00 0.00 0.00 00:40:59.373 
=================================================================================================================== 00:40:59.373 Total : 23014.33 89.90 0.00 0.00 0.00 0.00 0.00 00:40:59.373 00:41:00.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:00.305 Nvme0n1 : 7.00 23050.00 90.04 0.00 0.00 0.00 0.00 0.00 00:41:00.305 =================================================================================================================== 00:41:00.305 Total : 23050.00 90.04 0.00 0.00 0.00 0.00 0.00 00:41:00.305 00:41:01.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:01.235 Nvme0n1 : 8.00 23084.75 90.17 0.00 0.00 0.00 0.00 0.00 00:41:01.235 =================================================================================================================== 00:41:01.235 Total : 23084.75 90.17 0.00 0.00 0.00 0.00 0.00 00:41:01.235 00:41:02.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.167 Nvme0n1 : 9.00 23109.11 90.27 0.00 0.00 0.00 0.00 0.00 00:41:02.167 =================================================================================================================== 00:41:02.167 Total : 23109.11 90.27 0.00 0.00 0.00 0.00 0.00 00:41:02.167 00:41:03.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:03.537 Nvme0n1 : 10.00 23128.60 90.35 0.00 0.00 0.00 0.00 0.00 00:41:03.537 =================================================================================================================== 00:41:03.537 Total : 23128.60 90.35 0.00 0.00 0.00 0.00 0.00 00:41:03.537 00:41:03.537 00:41:03.537 Latency(us) 00:41:03.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:03.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:03.537 Nvme0n1 : 10.01 23128.66 90.35 0.00 0.00 5530.40 4244.24 15478.98 00:41:03.537 =================================================================================================================== 00:41:03.537 Total : 23128.66 90.35 0.00 0.00 5530.40 4244.24 15478.98 00:41:03.537 0 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2146279 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 2146279 ']' 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 2146279 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2146279 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2146279' 00:41:03.537 killing process with pid 2146279 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 2146279 00:41:03.537 Received shutdown signal, test time was about 10.000000 seconds 00:41:03.537 00:41:03.537 Latency(us) 00:41:03.537 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:41:03.537 =================================================================================================================== 00:41:03.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 2146279 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:03.537 03:37:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:03.795 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:41:03.795 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:04.052 [2024-06-11 03:37:45.408314] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:04.052 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:41:04.310 request: 00:41:04.310 { 00:41:04.310 "uuid": "a75039c9-fba8-462c-9c1f-e989373f4313", 00:41:04.310 "method": "bdev_lvol_get_lvstores", 00:41:04.310 "req_id": 1 00:41:04.310 } 00:41:04.310 Got JSON-RPC error response 00:41:04.310 response: 00:41:04.310 { 00:41:04.310 "code": -19, 00:41:04.310 "message": "No such device" 00:41:04.310 } 00:41:04.310 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:41:04.310 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:04.310 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:04.310 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:04.310 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:04.567 aio_bdev 00:41:04.567 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9 00:41:04.567 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9 00:41:04.567 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:41:04.567 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:41:04.568 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:41:04.568 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:41:04.568 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:04.568 03:37:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9 -t 2000 00:41:04.825 [ 00:41:04.825 { 00:41:04.825 "name": "bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9", 00:41:04.825 "aliases": [ 00:41:04.825 "lvs/lvol" 00:41:04.825 ], 00:41:04.825 "product_name": "Logical Volume", 00:41:04.825 "block_size": 4096, 00:41:04.825 "num_blocks": 38912, 00:41:04.825 "uuid": "bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9", 00:41:04.825 "assigned_rate_limits": { 00:41:04.825 "rw_ios_per_sec": 0, 00:41:04.825 "rw_mbytes_per_sec": 0, 00:41:04.825 "r_mbytes_per_sec": 0, 00:41:04.825 "w_mbytes_per_sec": 0 00:41:04.825 }, 00:41:04.825 "claimed": false, 00:41:04.825 "zoned": false, 00:41:04.825 "supported_io_types": { 00:41:04.825 "read": true, 00:41:04.825 "write": true, 00:41:04.825 "unmap": true, 00:41:04.825 "write_zeroes": true, 00:41:04.825 "flush": false, 00:41:04.825 "reset": true, 00:41:04.825 "compare": false, 00:41:04.825 "compare_and_write": false, 00:41:04.825 "abort": false, 00:41:04.825 "nvme_admin": false, 00:41:04.825 "nvme_io": false 00:41:04.825 }, 00:41:04.825 "driver_specific": { 00:41:04.825 "lvol": { 00:41:04.825 "lvol_store_uuid": "a75039c9-fba8-462c-9c1f-e989373f4313", 00:41:04.825 "base_bdev": "aio_bdev", 
00:41:04.825 "thin_provision": false, 00:41:04.825 "num_allocated_clusters": 38, 00:41:04.825 "snapshot": false, 00:41:04.825 "clone": false, 00:41:04.825 "esnap_clone": false 00:41:04.825 } 00:41:04.825 } 00:41:04.825 } 00:41:04.825 ] 00:41:04.825 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:41:04.825 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:41:04.825 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:05.083 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:05.083 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a75039c9-fba8-462c-9c1f-e989373f4313 00:41:05.083 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:05.083 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:05.083 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bd1b7e94-fa49-45ac-95c7-c845e4b4f3d9 00:41:05.342 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a75039c9-fba8-462c-9c1f-e989373f4313 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.599 00:41:05.599 real 0m14.939s 00:41:05.599 user 0m14.449s 00:41:05.599 sys 0m1.435s 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:05.599 ************************************ 00:41:05.599 END TEST lvs_grow_clean 00:41:05.599 ************************************ 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:05.599 ************************************ 00:41:05.599 START TEST lvs_grow_dirty 00:41:05.599 ************************************ 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:05.599 03:37:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.599 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.857 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:05.857 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:05.857 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:06.114 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:06.114 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:06.114 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:06.371 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:06.371 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:06.371 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac96d5eb-6156-4445-b37e-38b3437d208a lvol 150 00:41:06.371 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b2ced0a-683b-461e-b378-a1f112bd0328 00:41:06.371 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:06.371 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:06.628 [2024-06-11 03:37:47.854535] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:06.628 [2024-06-11 03:37:47.854584] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:06.628 true 00:41:06.628 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:06.628 03:37:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:06.886 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:06.886 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:06.886 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b2ced0a-683b-461e-b378-a1f112bd0328 00:41:07.144 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:07.144 [2024-06-11 03:37:48.512484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:07.144 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2148771 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2148771 /var/tmp/bdevperf.sock 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2148771 ']' 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:07.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:07.401 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:07.401 [2024-06-11 03:37:48.732205] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
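The dirty run re-provisions the same topology on a fresh lvstore (ac96d5eb-…) and again starts at 49 data clusters. The mechanic both sub-tests exercise is the grow sequence sketched below, issued while bdevperf keeps random writes in flight; total_data_clusters is expected to double from 49 to 99 (400 MiB at 4 MiB per cluster, less metadata), exactly as the jq checks report in both runs (same $rpc/$lvs shorthand as before):

  truncate -s 400M aio_file             # backing file already enlarged during setup
  $rpc bdev_aio_rescan aio_bdev         # AIO bdev picks up 51200 -> 102400 blocks
  $rpc bdev_lvol_grow_lvstore -u "$lvs"     # lvstore claims the new clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

Growing under live I/O is the point of the test: the lvol keeps servicing writes while the lvstore metadata is extended in place.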
00:41:07.401 [2024-06-11 03:37:48.732254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148771 ] 00:41:07.401 EAL: No free 2048 kB hugepages reported on node 1 00:41:07.401 [2024-06-11 03:37:48.791050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.658 [2024-06-11 03:37:48.832444] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.658 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:07.659 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:41:07.659 03:37:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:07.916 Nvme0n1 00:41:07.916 03:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:08.173 [ 00:41:08.173 { 00:41:08.173 "name": "Nvme0n1", 00:41:08.173 "aliases": [ 00:41:08.173 "1b2ced0a-683b-461e-b378-a1f112bd0328" 00:41:08.173 ], 00:41:08.173 "product_name": "NVMe disk", 00:41:08.173 "block_size": 4096, 00:41:08.173 "num_blocks": 38912, 00:41:08.173 "uuid": "1b2ced0a-683b-461e-b378-a1f112bd0328", 00:41:08.173 "assigned_rate_limits": { 00:41:08.173 "rw_ios_per_sec": 0, 00:41:08.173 "rw_mbytes_per_sec": 0, 00:41:08.173 "r_mbytes_per_sec": 0, 00:41:08.173 "w_mbytes_per_sec": 0 00:41:08.173 }, 00:41:08.173 "claimed": false, 00:41:08.173 "zoned": false, 00:41:08.173 "supported_io_types": { 00:41:08.173 "read": true, 00:41:08.173 "write": true, 00:41:08.173 "unmap": true, 00:41:08.173 "write_zeroes": true, 00:41:08.173 "flush": true, 00:41:08.173 "reset": true, 00:41:08.173 "compare": true, 00:41:08.173 "compare_and_write": true, 00:41:08.173 "abort": true, 00:41:08.173 "nvme_admin": true, 00:41:08.173 "nvme_io": true 00:41:08.173 }, 00:41:08.173 "memory_domains": [ 00:41:08.173 { 00:41:08.173 "dma_device_id": "system", 00:41:08.173 "dma_device_type": 1 00:41:08.173 } 00:41:08.173 ], 00:41:08.173 "driver_specific": { 00:41:08.173 "nvme": [ 00:41:08.173 { 00:41:08.173 "trid": { 00:41:08.173 "trtype": "TCP", 00:41:08.173 "adrfam": "IPv4", 00:41:08.173 "traddr": "10.0.0.2", 00:41:08.173 "trsvcid": "4420", 00:41:08.173 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:08.173 }, 00:41:08.173 "ctrlr_data": { 00:41:08.173 "cntlid": 1, 00:41:08.173 "vendor_id": "0x8086", 00:41:08.173 "model_number": "SPDK bdev Controller", 00:41:08.173 "serial_number": "SPDK0", 00:41:08.173 "firmware_revision": "24.09", 00:41:08.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:08.173 "oacs": { 00:41:08.173 "security": 0, 00:41:08.173 "format": 0, 00:41:08.173 "firmware": 0, 00:41:08.173 "ns_manage": 0 00:41:08.173 }, 00:41:08.173 "multi_ctrlr": true, 00:41:08.173 "ana_reporting": false 00:41:08.173 }, 00:41:08.173 "vs": { 00:41:08.173 "nvme_version": "1.3" 00:41:08.173 }, 00:41:08.173 "ns_data": { 00:41:08.173 "id": 1, 00:41:08.173 "can_share": true 00:41:08.173 } 00:41:08.173 } 00:41:08.173 ], 00:41:08.173 "mp_policy": "active_passive" 00:41:08.173 } 00:41:08.173 } 00:41:08.173 ] 00:41:08.173 03:37:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2148870 00:41:08.173 03:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:08.173 03:37:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:08.173 Running I/O for 10 seconds... 00:41:09.105 Latency(us) 00:41:09.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.105 Nvme0n1 : 1.00 22366.00 87.37 0.00 0.00 0.00 0.00 0.00 00:41:09.105 =================================================================================================================== 00:41:09.105 Total : 22366.00 87.37 0.00 0.00 0.00 0.00 0.00 00:41:09.105 00:41:10.037 03:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:10.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:10.295 Nvme0n1 : 2.00 22687.00 88.62 0.00 0.00 0.00 0.00 0.00 00:41:10.295 =================================================================================================================== 00:41:10.295 Total : 22687.00 88.62 0.00 0.00 0.00 0.00 0.00 00:41:10.295 00:41:10.295 true 00:41:10.295 03:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:10.295 03:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:10.295 03:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:10.552 03:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:10.552 03:37:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2148870 00:41:11.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:11.117 Nvme0n1 : 3.00 22780.67 88.99 0.00 0.00 0.00 0.00 0.00 00:41:11.117 =================================================================================================================== 00:41:11.117 Total : 22780.67 88.99 0.00 0.00 0.00 0.00 0.00 00:41:11.117 00:41:12.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:12.138 Nvme0n1 : 4.00 22851.50 89.26 0.00 0.00 0.00 0.00 0.00 00:41:12.138 =================================================================================================================== 00:41:12.138 Total : 22851.50 89.26 0.00 0.00 0.00 0.00 0.00 00:41:12.138 00:41:13.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:13.075 Nvme0n1 : 5.00 22922.80 89.54 0.00 0.00 0.00 0.00 0.00 00:41:13.075 =================================================================================================================== 00:41:13.075 Total : 22922.80 89.54 0.00 0.00 0.00 0.00 0.00 00:41:13.075 00:41:14.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.453 Nvme0n1 : 6.00 22973.00 89.74 0.00 0.00 0.00 0.00 0.00 00:41:14.453 
=================================================================================================================== 00:41:14.453 Total : 22973.00 89.74 0.00 0.00 0.00 0.00 0.00 00:41:14.453 00:41:15.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:15.390 Nvme0n1 : 7.00 23005.43 89.86 0.00 0.00 0.00 0.00 0.00 00:41:15.390 =================================================================================================================== 00:41:15.390 Total : 23005.43 89.86 0.00 0.00 0.00 0.00 0.00 00:41:15.390 00:41:16.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:16.327 Nvme0n1 : 8.00 23036.75 89.99 0.00 0.00 0.00 0.00 0.00 00:41:16.327 =================================================================================================================== 00:41:16.327 Total : 23036.75 89.99 0.00 0.00 0.00 0.00 0.00 00:41:16.327 00:41:17.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:17.266 Nvme0n1 : 9.00 23063.78 90.09 0.00 0.00 0.00 0.00 0.00 00:41:17.266 =================================================================================================================== 00:41:17.266 Total : 23063.78 90.09 0.00 0.00 0.00 0.00 0.00 00:41:17.266 00:41:18.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:18.203 Nvme0n1 : 10.00 23088.60 90.19 0.00 0.00 0.00 0.00 0.00 00:41:18.203 =================================================================================================================== 00:41:18.203 Total : 23088.60 90.19 0.00 0.00 0.00 0.00 0.00 00:41:18.203 00:41:18.203 00:41:18.203 Latency(us) 00:41:18.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:18.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:18.203 Nvme0n1 : 10.01 23088.24 90.19 0.00 0.00 5539.97 4244.24 14979.66 00:41:18.203 =================================================================================================================== 00:41:18.203 Total : 23088.24 90.19 0.00 0.00 5539.97 4244.24 14979.66 00:41:18.203 0 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2148771 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 2148771 ']' 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 2148771 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2148771 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2148771' 00:41:18.203 killing process with pid 2148771 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 2148771 00:41:18.203 Received shutdown signal, test time was about 10.000000 seconds 00:41:18.203 00:41:18.203 Latency(us) 00:41:18.203 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:41:18.203 =================================================================================================================== 00:41:18.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:18.203 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 2148771 00:41:18.462 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:18.462 03:37:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:18.721 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:18.721 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2145802 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2145802 00:41:18.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2145802 Killed "${NVMF_APP[@]}" "$@" 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2150711 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2150711 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 2150711 ']' 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
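For orientation at this point in the trace: the dirty-grow case has already grown the lvstore while bdevperf I/O was running, then killed the target with kill -9 so the new metadata was never cleanly shut down; the restart in progress here is what makes the blobstore recovery below necessary. A minimal sketch of the recover-and-verify sequence, condensed from the rpc.py calls visible in this log (long Jenkins workspace paths are shortened, the restart is actually done by the harness's nvmfappstart/waitforlisten helpers, and the UUIDs are the ones from this run):

# Hedged sketch of the lvs_grow_dirty crash/recovery check (paths shortened)
rpc=scripts/rpc.py                                   # full workspace path elided
lvs=ac96d5eb-6156-4445-b37e-38b3437d208a             # lvstore UUID from this run
lvol=1b2ced0a-683b-461e-b378-a1f112bd0328            # lvol UUID from this run

kill -9 "$nvmfpid"                                   # SIGKILL leaves the lvstore dirty
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # restart the target app
$rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096     # blobstore recovery replays metadata here
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b "$lvol" -t 2000               # lvol must reappear once recovery completes
(( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters') == 61 ))
(( $($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters') == 99 ))  # the grow survived the crash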
00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:18.981 03:38:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:41:18.981 [2024-06-11 03:38:00.316543] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:41:18.981 [2024-06-11 03:38:00.316590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:18.981 EAL: No free 2048 kB hugepages reported on node 1 00:41:18.981 [2024-06-11 03:38:00.379051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.240 [2024-06-11 03:38:00.419236] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:19.240 [2024-06-11 03:38:00.419273] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:19.240 [2024-06-11 03:38:00.419279] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:19.240 [2024-06-11 03:38:00.419285] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:19.240 [2024-06-11 03:38:00.419290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:19.240 [2024-06-11 03:38:00.419322] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.807 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:19.807 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:41:19.807 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:19.807 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:19.807 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:19.807 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:19.807 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:20.066 [2024-06-11 03:38:01.279507] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:20.066 [2024-06-11 03:38:01.279600] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:20.067 [2024-06-11 03:38:01.279625] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1b2ced0a-683b-461e-b378-a1f112bd0328 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=1b2ced0a-683b-461e-b378-a1f112bd0328 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # local bdev_timeout= 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:20.067 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b2ced0a-683b-461e-b378-a1f112bd0328 -t 2000 00:41:20.326 [ 00:41:20.326 { 00:41:20.326 "name": "1b2ced0a-683b-461e-b378-a1f112bd0328", 00:41:20.326 "aliases": [ 00:41:20.326 "lvs/lvol" 00:41:20.326 ], 00:41:20.326 "product_name": "Logical Volume", 00:41:20.326 "block_size": 4096, 00:41:20.326 "num_blocks": 38912, 00:41:20.326 "uuid": "1b2ced0a-683b-461e-b378-a1f112bd0328", 00:41:20.326 "assigned_rate_limits": { 00:41:20.326 "rw_ios_per_sec": 0, 00:41:20.326 "rw_mbytes_per_sec": 0, 00:41:20.326 "r_mbytes_per_sec": 0, 00:41:20.326 "w_mbytes_per_sec": 0 00:41:20.326 }, 00:41:20.326 "claimed": false, 00:41:20.326 "zoned": false, 00:41:20.326 "supported_io_types": { 00:41:20.326 "read": true, 00:41:20.326 "write": true, 00:41:20.326 "unmap": true, 00:41:20.326 "write_zeroes": true, 00:41:20.326 "flush": false, 00:41:20.326 "reset": true, 00:41:20.326 "compare": false, 00:41:20.326 "compare_and_write": false, 00:41:20.326 "abort": false, 00:41:20.326 "nvme_admin": false, 00:41:20.326 "nvme_io": false 00:41:20.326 }, 00:41:20.326 "driver_specific": { 00:41:20.326 "lvol": { 00:41:20.326 "lvol_store_uuid": "ac96d5eb-6156-4445-b37e-38b3437d208a", 00:41:20.326 "base_bdev": "aio_bdev", 00:41:20.326 "thin_provision": false, 00:41:20.326 "num_allocated_clusters": 38, 00:41:20.326 "snapshot": false, 00:41:20.326 "clone": false, 00:41:20.326 "esnap_clone": false 00:41:20.326 } 00:41:20.326 } 00:41:20.326 } 00:41:20.326 ] 00:41:20.326 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:41:20.326 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:20.326 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:20.585 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:20.585 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:20.585 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:20.585 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:20.585 03:38:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:20.845 [2024-06-11 03:38:02.116209] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:20.845 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:21.105 request: 00:41:21.105 { 00:41:21.105 "uuid": "ac96d5eb-6156-4445-b37e-38b3437d208a", 00:41:21.105 "method": "bdev_lvol_get_lvstores", 00:41:21.105 "req_id": 1 00:41:21.105 } 00:41:21.105 Got JSON-RPC error response 00:41:21.105 response: 00:41:21.105 { 00:41:21.105 "code": -19, 00:41:21.105 "message": "No such device" 00:41:21.105 } 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:21.105 aio_bdev 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b2ced0a-683b-461e-b378-a1f112bd0328 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=1b2ced0a-683b-461e-b378-a1f112bd0328 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:41:21.105 03:38:02 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:41:21.105 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:21.363 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b2ced0a-683b-461e-b378-a1f112bd0328 -t 2000 00:41:21.622 [ 00:41:21.622 { 00:41:21.622 "name": "1b2ced0a-683b-461e-b378-a1f112bd0328", 00:41:21.622 "aliases": [ 00:41:21.622 "lvs/lvol" 00:41:21.622 ], 00:41:21.622 "product_name": "Logical Volume", 00:41:21.622 "block_size": 4096, 00:41:21.622 "num_blocks": 38912, 00:41:21.622 "uuid": "1b2ced0a-683b-461e-b378-a1f112bd0328", 00:41:21.622 "assigned_rate_limits": { 00:41:21.622 "rw_ios_per_sec": 0, 00:41:21.622 "rw_mbytes_per_sec": 0, 00:41:21.622 "r_mbytes_per_sec": 0, 00:41:21.622 "w_mbytes_per_sec": 0 00:41:21.622 }, 00:41:21.622 "claimed": false, 00:41:21.622 "zoned": false, 00:41:21.622 "supported_io_types": { 00:41:21.622 "read": true, 00:41:21.622 "write": true, 00:41:21.622 "unmap": true, 00:41:21.622 "write_zeroes": true, 00:41:21.622 "flush": false, 00:41:21.622 "reset": true, 00:41:21.622 "compare": false, 00:41:21.622 "compare_and_write": false, 00:41:21.622 "abort": false, 00:41:21.622 "nvme_admin": false, 00:41:21.622 "nvme_io": false 00:41:21.622 }, 00:41:21.622 "driver_specific": { 00:41:21.622 "lvol": { 00:41:21.622 "lvol_store_uuid": "ac96d5eb-6156-4445-b37e-38b3437d208a", 00:41:21.622 "base_bdev": "aio_bdev", 00:41:21.622 "thin_provision": false, 00:41:21.622 "num_allocated_clusters": 38, 00:41:21.622 "snapshot": false, 00:41:21.622 "clone": false, 00:41:21.622 "esnap_clone": false 00:41:21.622 } 00:41:21.622 } 00:41:21.622 } 00:41:21.622 ] 00:41:21.622 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:41:21.622 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:21.622 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:21.622 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:21.622 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:21.622 03:38:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:21.881 03:38:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:21.881 03:38:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b2ced0a-683b-461e-b378-a1f112bd0328 00:41:22.139 03:38:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
ac96d5eb-6156-4445-b37e-38b3437d208a 00:41:22.139 03:38:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:22.398 03:38:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:22.398 00:41:22.398 real 0m16.690s 00:41:22.398 user 0m41.517s 00:41:22.398 sys 0m4.004s 00:41:22.398 03:38:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:22.398 03:38:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:22.398 ************************************ 00:41:22.398 END TEST lvs_grow_dirty 00:41:22.398 ************************************ 00:41:22.398 03:38:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:22.398 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:41:22.398 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:41:22.398 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:22.399 nvmf_trace.0 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:22.399 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:22.399 rmmod nvme_tcp 00:41:22.399 rmmod nvme_fabrics 00:41:22.662 rmmod nvme_keyring 00:41:22.662 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:22.662 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:41:22.662 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:41:22.662 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2150711 ']' 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2150711 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 2150711 ']' 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 2150711 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:22.663 03:38:03 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2150711 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2150711' 00:41:22.663 killing process with pid 2150711 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 2150711 00:41:22.663 03:38:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 2150711 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:22.663 03:38:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:25.201 03:38:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:25.201 00:41:25.201 real 0m40.996s 00:41:25.201 user 1m1.701s 00:41:25.201 sys 0m10.519s 00:41:25.201 03:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:25.201 03:38:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:25.201 ************************************ 00:41:25.201 END TEST nvmf_lvs_grow 00:41:25.201 ************************************ 00:41:25.201 03:38:06 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:41:25.201 03:38:06 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:25.201 03:38:06 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:25.201 03:38:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:25.201 ************************************ 00:41:25.201 START TEST nvmf_bdev_io_wait 00:41:25.201 ************************************ 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:41:25.201 * Looking for test storage... 
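The nvmf_bdev_io_wait test starting here works by starving the target's bdev_io pool (further down the trace, rpc_cmd bdev_set_options -p 5 -c 1 runs before framework_start_init) and then driving four secondary-process bdevperf instances in parallel, one per workload, so queued I/O is forced through the bdev_io_wait retry path. As a hedged sketch condensed from the invocations traced below (BP stands in for the full build/examples/bdevperf path, and process substitution of gen_nvmf_target_json mirrors the --json /dev/fd/63 seen in the log):

# Sketch of the four concurrent bdevperf runs against one subsystem
BP=build/examples/bdevperf          # full Jenkins workspace path elided
$BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"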
00:41:25.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:25.201 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:25.202 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:25.202 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:41:25.202 03:38:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:31.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:31.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:41:31.773 Found net devices under 0000:86:00.0: cvl_0_0 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:31.773 Found net devices under 0000:86:00.1: cvl_0_1 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:31.773 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:31.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:31.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:41:31.774 00:41:31.774 --- 10.0.0.2 ping statistics --- 00:41:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.774 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:31.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:31.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:41:31.774 00:41:31.774 --- 10.0.0.1 ping statistics --- 00:41:31.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.774 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2155209 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2155209 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 2155209 ']' 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:31.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 [2024-06-11 03:38:12.711057] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:41:31.774 [2024-06-11 03:38:12.711099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:31.774 EAL: No free 2048 kB hugepages reported on node 1 00:41:31.774 [2024-06-11 03:38:12.774972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:31.774 [2024-06-11 03:38:12.816227] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:31.774 [2024-06-11 03:38:12.816268] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:31.774 [2024-06-11 03:38:12.816275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:31.774 [2024-06-11 03:38:12.816282] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:31.774 [2024-06-11 03:38:12.816287] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:31.774 [2024-06-11 03:38:12.816337] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:31.774 [2024-06-11 03:38:12.816434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:41:31.774 [2024-06-11 03:38:12.816502] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:41:31.774 [2024-06-11 03:38:12.816502] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 [2024-06-11 03:38:12.960539] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.774 03:38:12 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 Malloc0 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.774 03:38:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.774 [2024-06-11 03:38:13.015739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2155297 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2155299 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:31.774 { 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme$subsystem", 00:41:31.774 "trtype": "$TEST_TRANSPORT", 00:41:31.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "$NVMF_PORT", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.774 "hdgst": ${hdgst:-false}, 00:41:31.774 "ddgst": ${ddgst:-false} 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 } 00:41:31.774 EOF 00:41:31.774 )") 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2155301 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:31.774 { 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme$subsystem", 00:41:31.774 "trtype": "$TEST_TRANSPORT", 00:41:31.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "$NVMF_PORT", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.774 "hdgst": ${hdgst:-false}, 00:41:31.774 "ddgst": ${ddgst:-false} 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 } 00:41:31.774 EOF 00:41:31.774 )") 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2155304 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:31.774 { 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme$subsystem", 00:41:31.774 "trtype": "$TEST_TRANSPORT", 00:41:31.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "$NVMF_PORT", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.774 "hdgst": ${hdgst:-false}, 00:41:31.774 "ddgst": ${ddgst:-false} 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 } 00:41:31.774 EOF 00:41:31.774 )") 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:31.774 03:38:13 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:31.774 { 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme$subsystem", 00:41:31.774 "trtype": "$TEST_TRANSPORT", 00:41:31.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "$NVMF_PORT", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.774 "hdgst": ${hdgst:-false}, 00:41:31.774 "ddgst": ${ddgst:-false} 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 } 00:41:31.774 EOF 00:41:31.774 )") 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2155297 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme1", 00:41:31.774 "trtype": "tcp", 00:41:31.774 "traddr": "10.0.0.2", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "4420", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:31.774 "hdgst": false, 00:41:31.774 "ddgst": false 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 }' 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
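Each heredoc template above resolves into one bdev_nvme_attach_controller stanza; the IFS=, / printf / jq plumbing in this stretch joins and pretty-prints them, producing the resolved '{ "params": ... }' blocks printed around this point. Wrapped in SPDK's usual JSON-config envelope (the outer "subsystems" layer shown here is inferred, since the trace only prints the inner objects), the config each bdevperf instance reads from /dev/fd/63 looks roughly like:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}

Templating the connection parameters this way lets the four bdevperf invocations differ only in core mask, instance id, and workload.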
00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme1", 00:41:31.774 "trtype": "tcp", 00:41:31.774 "traddr": "10.0.0.2", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "4420", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:31.774 "hdgst": false, 00:41:31.774 "ddgst": false 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 }' 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme1", 00:41:31.774 "trtype": "tcp", 00:41:31.774 "traddr": "10.0.0.2", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "4420", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:31.774 "hdgst": false, 00:41:31.774 "ddgst": false 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 }' 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:41:31.774 03:38:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:31.774 "params": { 00:41:31.774 "name": "Nvme1", 00:41:31.774 "trtype": "tcp", 00:41:31.774 "traddr": "10.0.0.2", 00:41:31.774 "adrfam": "ipv4", 00:41:31.774 "trsvcid": "4420", 00:41:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:31.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:31.774 "hdgst": false, 00:41:31.774 "ddgst": false 00:41:31.774 }, 00:41:31.774 "method": "bdev_nvme_attach_controller" 00:41:31.774 }' 00:41:31.774 [2024-06-11 03:38:13.061896] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:41:31.774 [2024-06-11 03:38:13.061940] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:31.774 [2024-06-11 03:38:13.066003] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:41:31.774 [2024-06-11 03:38:13.066047] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:31.774 [2024-06-11 03:38:13.067007] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:41:31.774 [2024-06-11 03:38:13.067084] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:31.774 [2024-06-11 03:38:13.068721] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:41:31.774 [2024-06-11 03:38:13.068766] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:31.774 EAL: No free 2048 kB hugepages reported on node 1 00:41:32.033 EAL: No free 2048 kB hugepages reported on node 1 00:41:32.033 [2024-06-11 03:38:13.241372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.033 [2024-06-11 03:38:13.267755] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:41:32.033 EAL: No free 2048 kB hugepages reported on node 1 00:41:32.033 [2024-06-11 03:38:13.340961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.033 [2024-06-11 03:38:13.367568] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:41:32.033 EAL: No free 2048 kB hugepages reported on node 1 00:41:32.033 [2024-06-11 03:38:13.433981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.292 [2024-06-11 03:38:13.466544] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:41:32.292 [2024-06-11 03:38:13.491493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.292 [2024-06-11 03:38:13.518354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:41:32.292 Running I/O for 1 seconds... 00:41:32.292 Running I/O for 1 seconds... 00:41:32.550 Running I/O for 1 seconds... 00:41:32.550 Running I/O for 1 seconds...
00:41:33.512
00:41:33.512 Latency(us)
00:41:33.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:33.512 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:41:33.512 Nvme1n1 : 1.00 15670.21 61.21 0.00 0.00 8147.89 4400.27 17351.44
00:41:33.512 ===================================================================================================================
00:41:33.512 Total : 15670.21 61.21 0.00 0.00 8147.89 4400.27 17351.44
00:41:33.512
00:41:33.512 Latency(us)
00:41:33.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:33.512 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:41:33.512 Nvme1n1 : 1.01 6288.33 24.56 0.00 0.00 20145.97 11109.91 34952.53
00:41:33.512 ===================================================================================================================
00:41:33.512 Total : 6288.33 24.56 0.00 0.00 20145.97 11109.91 34952.53
00:41:33.512
00:41:33.512 Latency(us)
00:41:33.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:33.512 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:41:33.512 Nvme1n1 : 1.00 252085.70 984.71 0.00 0.00 505.90 205.78 616.35
00:41:33.512 ===================================================================================================================
00:41:33.512 Total : 252085.70 984.71 0.00 0.00 505.90 205.78 616.35
00:41:33.512
00:41:33.513 Latency(us)
00:41:33.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:33.513 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:41:33.513 Nvme1n1 : 1.01 6851.40 26.76 0.00 0.00 18629.40 5055.63 46686.60
00:41:33.513 ===================================================================================================================
00:41:33.513 Total : 6851.40 26.76 0.00 0.00 18629.40 5055.63 46686.60
00:41:33.772 03:38:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 2155299 00:41:33.772 03:38:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2155301 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2155304 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:33.772 rmmod nvme_tcp 00:41:33.772 rmmod nvme_fabrics 00:41:33.772 rmmod nvme_keyring 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2155209 ']' 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2155209 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 2155209 ']' 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 2155209 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2155209 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2155209' 00:41:33.772 killing process with pid 2155209 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 2155209 00:41:33.772 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 2155209 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:34.070 03:38:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.011 03:38:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:36.011 00:41:36.011 real 0m11.182s 00:41:36.011 user 0m16.802s 00:41:36.011 sys 0m6.390s 00:41:36.011 03:38:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:36.011 03:38:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:36.011 ************************************ 00:41:36.011 END TEST nvmf_bdev_io_wait 00:41:36.011 ************************************ 00:41:36.011 03:38:17 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:41:36.011 03:38:17 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:36.011 03:38:17 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:36.011 03:38:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:36.270 ************************************ 00:41:36.270 START TEST nvmf_queue_depth 00:41:36.270 ************************************ 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:41:36.270 * Looking for test storage... 00:41:36.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:36.270 03:38:17 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:36.270 03:38:17 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:41:36.270 03:38:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:41:41.544 Found 0000:86:00.0 (0x8086 - 0x159b) 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:41.544 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:41:41.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:41:41.545 Found net devices under 0000:86:00.0: cvl_0_0 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:41:41.545 Found net devices under 0000:86:00.1: cvl_0_1 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:41:41.545 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:41.804 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:41.804 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:41.804 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:41:41.804 03:38:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:41:41.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:41.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:41:41.804 00:41:41.804 --- 10.0.0.2 ping statistics --- 00:41:41.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.804 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:41.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:41.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:41:41.804 00:41:41.804 --- 10.0.0.1 ping statistics --- 00:41:41.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:41.804 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2159364 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2159364 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 
2159364 ']' 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:41.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:41.804 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:41.804 [2024-06-11 03:38:23.141720] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:41:41.804 [2024-06-11 03:38:23.141760] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:41.804 EAL: No free 2048 kB hugepages reported on node 1 00:41:41.804 [2024-06-11 03:38:23.202701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.064 [2024-06-11 03:38:23.242141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:42.064 [2024-06-11 03:38:23.242179] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:42.064 [2024-06-11 03:38:23.242186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:42.064 [2024-06-11 03:38:23.242192] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:42.064 [2024-06-11 03:38:23.242198] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
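What the nvmfpid=2159364 / waitforlisten 2159364 lines amount to: the target was launched inside the test namespace and the harness blocked until its RPC socket answered. A simplified sketch of that launch-and-wait pattern (waitforlisten in autotest_common.sh is more thorough; the polling loop below is a stand-in, not the harness code):

# Start the target in the namespace, core mask 0x2, all trace groups enabled.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the app responds,
# bailing out if the process dies during startup.
until ./scripts/rpc.py rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.5
done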
00:41:42.064 [2024-06-11 03:38:23.242219] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.064 [2024-06-11 03:38:23.366119] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.064 Malloc0 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.064 [2024-06-11 03:38:23.420541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2159396 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- 
target/queue_depth.sh@33 -- # waitforlisten 2159396 /var/tmp/bdevperf.sock 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 2159396 ']' 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:42.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:41:42.064 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.064 [2024-06-11 03:38:23.454532] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:41:42.064 [2024-06-11 03:38:23.454575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159396 ] 00:41:42.323 EAL: No free 2048 kB hugepages reported on node 1 00:41:42.323 [2024-06-11 03:38:23.513121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.323 [2024-06-11 03:38:23.552617] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:41:42.323 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:41:42.323 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:41:42.323 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:41:42.323 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:41:42.323 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:42.323 NVMe0n1 00:41:42.323 03:38:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:41:42.323 03:38:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:42.583 Running I/O for 10 seconds... 
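For readability, here are the provisioning steps just traced, collected into one sequence (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py, and 64/512 are the Malloc bdev size in MiB and its block size from queue_depth.sh):

# Target side: transport, backing bdev, subsystem, namespace, listener.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf waits on its own RPC socket (-z), attaches the
# controller over TCP, then drives 1024 outstanding 4 KiB verify I/Os for 10 s.
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests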
00:41:52.565
00:41:52.565 Latency(us)
00:41:52.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:52.565 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:41:52.565 Verification LBA range: start 0x0 length 0x4000
00:41:52.565 NVMe0n1 : 10.06 12709.35 49.65 0.00 0.00 80327.78 19099.06 58919.98
00:41:52.565 ===================================================================================================================
00:41:52.565 Total : 12709.35 49.65 0.00 0.00 80327.78 19099.06 58919.98
00:41:52.565 0
00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2159396 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 2159396 ']' 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2159396 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2159396 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2159396' killing process with pid 2159396 00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2159396 Received shutdown signal, test time was about 10.000000 seconds
00:41:52.565
00:41:52.565 Latency(us)
00:41:52.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:52.565 ===================================================================================================================
00:41:52.565 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:52.565 03:38:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2159396 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:52.825 rmmod nvme_tcp 00:41:52.825 rmmod nvme_fabrics 00:41:52.825 rmmod nvme_keyring 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2159364 ']' 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2159364 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 
2159364 ']' 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 2159364 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2159364 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2159364' 00:41:52.825 killing process with pid 2159364 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 2159364 00:41:52.825 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 2159364 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:53.084 03:38:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:55.623 03:38:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:41:55.623 00:41:55.623 real 0m19.037s 00:41:55.623 user 0m22.428s 00:41:55.623 sys 0m5.694s 00:41:55.623 03:38:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:41:55.623 03:38:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:55.623 ************************************ 00:41:55.623 END TEST nvmf_queue_depth 00:41:55.623 ************************************ 00:41:55.623 03:38:36 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:41:55.623 03:38:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:41:55.623 03:38:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:41:55.623 03:38:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:55.623 ************************************ 00:41:55.623 START TEST nvmf_target_multipath 00:41:55.623 ************************************ 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:41:55.623 * Looking for test storage... 
00:41:55.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.623 03:38:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:55.624 
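The trap registered at the end of that run is what makes the suite safe to interrupt: teardown fires on EXIT as well as on signals, so a failed test still unloads the kernel modules and removes the namespace. A sketch of the idiom, with the body reduced to the cleanup steps visible in these logs (nvmftestfini itself does more bookkeeping, and the netns removal is inferred from _remove_spdk_ns rather than quoted from it):

cleanup() {
    sync
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1 2> /dev/null || true
}
trap cleanup SIGINT SIGTERM EXIT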
03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:41:55.624 03:38:36 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:42:02.195 Found 0000:86:00.0 (0x8086 - 0x159b) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:42:02.195 Found 0000:86:00.1 (0x8086 - 0x159b) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.195 
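The array appends above are building lookup tables of known device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox IDs) that the harness then matches against the PCI bus to pick its test NICs. A rough stand-alone equivalent for the E810 case (the script consults a prebuilt pci_bus_cache rather than calling lspci, so treat this as an approximation):

intel=0x8086
e810=(0x1592 0x159b)
for id in "${e810[@]}"; do
    # -D prints the full domain:bus:dev.fn address, -d filters by vendor:device.
    lspci -D -d "${intel#0x}:${id#0x}" | while read -r addr _; do
        echo "Found $addr ($intel - $id)"
    done
done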
03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:42:02.195 Found net devices under 0000:86:00.0: cvl_0_0 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:42:02.195 Found net devices under 0000:86:00.1: cvl_0_1 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:02.195 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:02.196 
03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:02.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:02.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:42:02.196 00:42:02.196 --- 10.0.0.2 ping statistics --- 00:42:02.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.196 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:02.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:02.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:42:02.196 00:42:02.196 --- 10.0.0.1 ping statistics --- 00:42:02.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:02.196 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:42:02.196 only one NIC for nvmf test 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:02.196 rmmod nvme_tcp 00:42:02.196 rmmod nvme_fabrics 00:42:02.196 rmmod nvme_keyring 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:42:02.196 03:38:42 
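The netns plumbing traced above amounts to a small back-to-back topology: the target-side port (cvl_0_0, PCI 0000:86:00.0) is moved into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace. Condensed into a standalone sketch (same commands as the trace; the interface names assume the ice ports enumerate as cvl_0_0/cvl_0_1, as they do here):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                                        # private ns for the target NIC
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) in
    ping -c 1 10.0.0.2                                                  # sanity: root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back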
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:02.196 03:38:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:03.575 00:42:03.575 real 0m8.342s 00:42:03.575 user 0m1.771s 00:42:03.575 sys 0m4.574s 00:42:03.575 03:38:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:03.576 03:38:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:03.576 
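The matching nvmftestfini teardown, condensed from the trace above. Note that _remove_spdk_ns runs with its output redirected (eval '_remove_spdk_ns 14> /dev/null'), so its body never appears in this log; the ip netns delete line below is an assumption about what it does, not a transcript:

    sync
    modprobe -v -r nvme-tcp           # per the rmmod lines above, this also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns; hidden behind the redirect
    ip -4 addr flush cvl_0_1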
************************************ 00:42:03.576 END TEST nvmf_target_multipath 00:42:03.576 ************************************ 00:42:03.576 03:38:44 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:42:03.576 03:38:44 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:42:03.576 03:38:44 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:03.576 03:38:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:03.576 ************************************ 00:42:03.576 START TEST nvmf_zcopy 00:42:03.576 ************************************ 00:42:03.576 03:38:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:42:03.834 * Looking for test storage... 00:42:03.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs
00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:42:03.834 03:38:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.446 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:10.446 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:42:10.446 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:10.446 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 
00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:42:10.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:42:10.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:42:10.447 Found net devices under 0000:86:00.0: cvl_0_0 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:42:10.447 Found net devices under 0000:86:00.1: cvl_0_1 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:10.447 03:38:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:10.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:10.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:42:10.447 00:42:10.447 --- 10.0.0.2 ping statistics --- 00:42:10.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:10.447 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:10.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:10.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:42:10.447 00:42:10.447 --- 10.0.0.1 ping statistics --- 00:42:10.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:10.447 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:10.447 03:38:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2168821 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2168821 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 2168821 ']' 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:10.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:10.448 03:38:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.448 [2024-06-11 03:38:51.291469] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:42:10.448 [2024-06-11 03:38:51.291528] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:10.448 EAL: No free 2048 kB hugepages reported on node 1 00:42:10.448 [2024-06-11 03:38:51.355951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.448 [2024-06-11 03:38:51.395821] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:10.448 [2024-06-11 03:38:51.395860] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
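Behind nvmfappstart, the sequence is: launch nvmf_tgt inside the target namespace, record its pid, and wait for the RPC socket to come up. A sketch of the equivalent (the polling loop stands in for the harness's waitforlisten helper and is an assumption, not a transcript):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll until the target answers on its default RPC socket, /var/tmp/spdk.sock
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done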
00:42:10.448 [2024-06-11 03:38:51.395867] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:10.448 [2024-06-11 03:38:51.395874] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:10.448 [2024-06-11 03:38:51.395879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:10.448 [2024-06-11 03:38:51.395898] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.707 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.707 [2024-06-11 03:38:52.108039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.966 [2024-06-11 03:38:52.124205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.966 malloc0 00:42:10.966 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.966 
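rpc_cmd in these traces is the harness wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock. The provisioning just performed is equivalent to the following sequence (the last call is the nvmf_subsystem_add_ns traced just below):

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport with zero-copy enabled
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MiB RAM-backed bdev, 4096-byte blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1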
03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:42:10.967 { 00:42:10.967 "params": { 00:42:10.967 "name": "Nvme$subsystem", 00:42:10.967 "trtype": "$TEST_TRANSPORT", 00:42:10.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:10.967 "adrfam": "ipv4", 00:42:10.967 "trsvcid": "$NVMF_PORT", 00:42:10.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:10.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:10.967 "hdgst": ${hdgst:-false}, 00:42:10.967 "ddgst": ${ddgst:-false} 00:42:10.967 }, 00:42:10.967 "method": "bdev_nvme_attach_controller" 00:42:10.967 } 00:42:10.967 EOF 00:42:10.967 )") 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:42:10.967 03:38:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:10.967 "params": { 00:42:10.967 "name": "Nvme1", 00:42:10.967 "trtype": "tcp", 00:42:10.967 "traddr": "10.0.0.2", 00:42:10.967 "adrfam": "ipv4", 00:42:10.967 "trsvcid": "4420", 00:42:10.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:10.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:10.967 "hdgst": false, 00:42:10.967 "ddgst": false 00:42:10.967 }, 00:42:10.967 "method": "bdev_nvme_attach_controller" 00:42:10.967 }' 00:42:10.967 [2024-06-11 03:38:52.202291] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:42:10.967 [2024-06-11 03:38:52.202332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169017 ] 00:42:10.967 EAL: No free 2048 kB hugepages reported on node 1 00:42:10.967 [2024-06-11 03:38:52.262074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.967 [2024-06-11 03:38:52.301939] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:11.226 Running I/O for 10 seconds... 
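The initiator side is SPDK's bdevperf example fed a bdev config on an anonymous fd (--json /dev/fd/62). A standalone sketch using process substitution: the trace only prints the per-controller fragment and a `jq .`, so the outer subsystems/bdev wrapper below is reproduced from gen_nvmf_target_json's convention in SPDK's nvmf/common.sh rather than from this log:

    ./build/examples/bdevperf -t 10 -q 128 -w verify -o 8192 \
        --json <(echo '{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false },
            "method": "bdev_nvme_attach_controller" } ] } ] }')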
00:42:21.205 
00:42:21.205                                                                            Latency(us)
00:42:21.205 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:42:21.205 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:42:21.205   Verification LBA range: start 0x0 length 0x1000
00:42:21.205   Nvme1n1                   :      10.01    8904.32      69.56      0.00      0.00   14334.02    1708.62   24217.11
00:42:21.205 ===================================================================================================================
00:42:21.205 Total                       :               8904.32      69.56      0.00      0.00   14334.02    1708.62   24217.11
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2170802
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:42:21.464 {
00:42:21.464   "params": {
00:42:21.464     "name": "Nvme$subsystem",
00:42:21.464     "trtype": "$TEST_TRANSPORT",
00:42:21.464     "traddr": "$NVMF_FIRST_TARGET_IP",
00:42:21.464     "adrfam": "ipv4",
00:42:21.464     "trsvcid": "$NVMF_PORT",
00:42:21.464     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:42:21.464     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:42:21.464     "hdgst": ${hdgst:-false},
00:42:21.464     "ddgst": ${ddgst:-false}
00:42:21.464   },
00:42:21.464   "method": "bdev_nvme_attach_controller"
00:42:21.464 }
00:42:21.464 EOF
00:42:21.464 )")
[2024-06-11 03:39:02.702563] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-06-11 03:39:02.702596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
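As a sanity check, the columns of that table are mutually consistent: 8904.32 IOPS at the 8192-byte I/O size is 8904.32 × 8192 / 2^20 ≈ 69.56 MiB/s, matching the MiB/s column, and with a queue depth of 128 Little's law predicts an average latency of 128 / 8904.32 s ≈ 14375 µs, in line with the reported 14334.02 µs average.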
00:42:21.464 [2024-06-11 03:39:02.710551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.710561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:42:21.464 03:39:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:42:21.464 "params": { 00:42:21.464 "name": "Nvme1", 00:42:21.464 "trtype": "tcp", 00:42:21.464 "traddr": "10.0.0.2", 00:42:21.464 "adrfam": "ipv4", 00:42:21.464 "trsvcid": "4420", 00:42:21.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:21.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:21.464 "hdgst": false, 00:42:21.464 "ddgst": false 00:42:21.464 }, 00:42:21.464 "method": "bdev_nvme_attach_controller" 00:42:21.464 }' 00:42:21.464 [2024-06-11 03:39:02.718567] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.718577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.726588] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.726597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.734612] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.734621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.737510] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:42:21.464 [2024-06-11 03:39:02.737553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170802 ] 00:42:21.464 [2024-06-11 03:39:02.742632] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.742641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.750653] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.750662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.758676] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.758690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 EAL: No free 2048 kB hugepages reported on node 1 00:42:21.464 [2024-06-11 03:39:02.766698] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.766707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.774720] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.774728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.782741] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.782750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.790761] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.790770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.795903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.464 [2024-06-11 03:39:02.798783] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.798792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.806806] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.806816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.814825] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.814838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.464 [2024-06-11 03:39:02.822851] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.464 [2024-06-11 03:39:02.822871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.465 [2024-06-11 03:39:02.830868] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.465 [2024-06-11 03:39:02.830878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.465 [2024-06-11 03:39:02.835838] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.465 [2024-06-11 03:39:02.838891] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.465 [2024-06-11 03:39:02.838902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.465 [2024-06-11 03:39:02.846917] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.465 [2024-06-11 03:39:02.846931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.465 [2024-06-11 03:39:02.854944] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.465 [2024-06-11 03:39:02.854960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.465 [2024-06-11 03:39:02.862957] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.465 [2024-06-11 03:39:02.862967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.870973] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.870984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.878995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.879006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.887019] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.887030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.895059] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.895070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:42:21.724 [2024-06-11 03:39:02.903064] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.903079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.911081] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.911089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.919116] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.919135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.927129] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.927141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.935150] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.935160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.943173] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.943187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.951192] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.951203] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.959214] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.959225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.967236] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.967245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.975258] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.975267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.983286] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.983299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.991304] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.991312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:02.999331] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:02.999345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.007349] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.007360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.015369] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:21.724 [2024-06-11 03:39:03.015379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.023392] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.023402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.031412] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.031421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.039435] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.039445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.047457] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.047468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.055477] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.055490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.063498] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.063507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.071522] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.071531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.724 [2024-06-11 03:39:03.079545] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.724 [2024-06-11 03:39:03.079554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.725 [2024-06-11 03:39:03.087569] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.725 [2024-06-11 03:39:03.087581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.725 [2024-06-11 03:39:03.095591] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.725 [2024-06-11 03:39:03.095601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.725 [2024-06-11 03:39:03.103622] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:21.725 [2024-06-11 03:39:03.103639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:21.725 Running I/O for 5 seconds... 
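The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that run through the rest of this stage are expected noise rather than failures: while bdevperf drives the randrw workload, the script appears to keep re-issuing nvmf_subsystem_add_ns for a namespace that already exists, and the paused-subsystem RPC path (nvmf_rpc.c:1546) rejects each attempt. The same error is reproducible by hand against the target provisioned above (hypothetical repeat of an add that already succeeded):

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # NSID 1 exists -> "already in use"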
00:42:21.725 [2024-06-11 03:39:03.111636] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:21.725 [2024-06-11 03:39:03.111646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats continuously, every few milliseconds, from 03:39:03.123 through 03:39:05.686 ...]
00:42:24.325 [2024-06-11 03:39:05.685995] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:24.325 [2024-06-11 03:39:05.686019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:24.325 [2024-06-11 03:39:05.695252]
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.325 [2024-06-11 03:39:05.695270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.325 [2024-06-11 03:39:05.704152] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.325 [2024-06-11 03:39:05.704170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.325 [2024-06-11 03:39:05.712573] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.325 [2024-06-11 03:39:05.712591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.325 [2024-06-11 03:39:05.721528] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.325 [2024-06-11 03:39:05.721545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.730575] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.730593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.740134] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.740152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.748639] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.748657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.757697] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.757715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.766584] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.766602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.776069] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.776086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.784616] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.784633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.793565] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.793583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.584 [2024-06-11 03:39:05.801807] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.584 [2024-06-11 03:39:05.801824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.810590] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.810608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.819554] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.819575] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.828693] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.828711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.837606] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.837623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.846613] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.846630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.855854] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.855871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.864387] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.864404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.873178] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.873196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.882250] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.882269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.891192] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.891211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.898058] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.898075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.909067] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.909084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.917507] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.917524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.926238] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.926256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.935551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.935570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.943829] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.943848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.952355] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.952373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.961499] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.961518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.970480] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.970499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.978861] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.978880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.585 [2024-06-11 03:39:05.988297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.585 [2024-06-11 03:39:05.988315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:05.996740] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:05.996759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.005737] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.005755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.014924] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.014942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.023394] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.023412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.032735] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.032753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.041106] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.041124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.050244] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.050261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.059691] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.059709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.068550] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.068569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.077284] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.077303] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.086566] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.086584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.095067] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.095085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.104500] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.104518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.111156] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.111174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.121856] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.121874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.130234] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.130252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.139125] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.844 [2024-06-11 03:39:06.139143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.844 [2024-06-11 03:39:06.148057] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.148076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.156802] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.156819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.165682] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.165700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.174686] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.174705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.183274] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.183291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.191689] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.191707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.201093] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.201110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.209912] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.209930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.219337] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.219356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.227552] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.227570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.236516] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.236535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:24.845 [2024-06-11 03:39:06.246087] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:24.845 [2024-06-11 03:39:06.246106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.254673] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.254692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.262834] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.262852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.271750] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.271768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.280288] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.280306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.289336] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.289354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.298811] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.298828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.307876] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.307895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.316851] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.316869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.325742] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.325760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.334773] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.334791] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.104 [2024-06-11 03:39:06.343404] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.104 [2024-06-11 03:39:06.343422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.352724] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.352742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.361523] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.361540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.370637] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.370655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.379544] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.379561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.388664] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.388682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.398005] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.398029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.406984] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.407005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.416235] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.416252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.425234] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.425252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.434882] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.434899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.443434] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.443452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.452556] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.452573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.461514] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.461531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.470241] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.470259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.476980] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.476997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.487928] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.487947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.496506] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.496523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.105 [2024-06-11 03:39:06.505530] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.105 [2024-06-11 03:39:06.505547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.514555] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.514573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.523674] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.523693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.532721] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.532739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.541562] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.541579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.550818] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.550835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.559853] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.559870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.568736] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.568753] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.577699] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.577721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.586630] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.586647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.596297] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.596314] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.604824] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.604841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.613084] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.613100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.621974] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.621992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.631082] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.631099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.640242] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.640260] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.364 [2024-06-11 03:39:06.649205] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.364 [2024-06-11 03:39:06.649222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.658251] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.658268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.667217] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.667235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.684220] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.684239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.693202] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.693219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.702775] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.702793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.711130] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.711147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.720099] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.720116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.728885] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.728902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.738450] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.738467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.747483] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.747500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.756457] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.756480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.365 [2024-06-11 03:39:06.765516] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.365 [2024-06-11 03:39:06.765535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.774050] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.774068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.782554] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.782572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.791756] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.791774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.800523] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.800541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.809302] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.809319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.818200] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.818217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.827165] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.827182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.835498] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.835515] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.844284] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.844301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.853465] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.853482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.862568] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.862585] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.871531] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.871548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.881005] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.881027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.888023] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.888039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.898643] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.898662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.907015] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.907032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.915994] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.916017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.924442] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.924463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.933511] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.933529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.941795] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.941812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.951170] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.951187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.959475] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.624 [2024-06-11 03:39:06.959493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.624 [2024-06-11 03:39:06.968077] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.625 [2024-06-11 03:39:06.968093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.625 [2024-06-11 03:39:06.976931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.625 [2024-06-11 03:39:06.976950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.625 [2024-06-11 03:39:06.986511] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.625 [2024-06-11 03:39:06.986529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.625 [2024-06-11 03:39:06.994855] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.625 [2024-06-11 03:39:06.994872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.625 [2024-06-11 03:39:07.003645] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.625 [2024-06-11 03:39:07.003663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.625 [2024-06-11 03:39:07.012495] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.625 [2024-06-11 03:39:07.012512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.625 [2024-06-11 03:39:07.021844] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.625 [2024-06-11 03:39:07.021862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.030259] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.030278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.039498] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.039516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.048532] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.048549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.057931] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.057948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.066335] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.066352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.074801] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.074820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.083259] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.083279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.092478] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.092496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.100719] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.884 [2024-06-11 03:39:07.100737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.884 [2024-06-11 03:39:07.109585] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.109602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.118468] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.118485] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.127432] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.127449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.136440] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.136457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.145517] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.145534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.155068] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.155085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.163713] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.163731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.172552] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.172569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.181658] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.181675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.190748] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.190766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.199674] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.199691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.208367] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.208384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.217272] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.217290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.226302] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.226319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.235551] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.235569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.244402] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.244420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.253896] 
subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.253913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.262382] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.262400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.270829] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.270846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.280032] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.280050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:25.885 [2024-06-11 03:39:07.288349] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:25.885 [2024-06-11 03:39:07.288366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.297394] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.297412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.305738] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.305755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.314564] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.314582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.324109] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.324126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.333193] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.333210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.342780] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.342798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.351420] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.351439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.361279] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.361298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.369716] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.369733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145 [2024-06-11 03:39:07.378932] subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:26.145 [2024-06-11 03:39:07.378951] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:26.145
[... the same two-message pattern — subsystem.c:2037:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 9 ms intervals from 03:39:07.387 through 03:39:08.127 as the test keeps retrying nvmf_subsystem_add_ns for the already-claimed NSID 1 ...]
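The two messages above are the whole story of this phase: the RPC layer keeps retrying an add-namespace call for an NSID the subsystem already owns, and the target rejects it each time. As a rough sketch, the same pair can be provoked against a live target with a single call via SPDK's rpc.py; the bdev name malloc0 here is illustrative, while the nvmf_subsystem_add_ns syntax matches the rpc_cmd calls later in this log:

  # sketch: assumes a running nvmf_tgt whose nqn.2016-06.io.spdk:cnode1 already exposes NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # the target-side log then shows the same rejection pair:
  #   spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
  #   nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace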
00:42:26.925
00:42:26.925 Latency(us)
00:42:26.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:26.925 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:42:26.925 Nvme1n1 : 5.00 17308.68 135.22 0.00 0.00 7388.62 3136.37 17226.61
00:42:26.925 ===================================================================================================================
00:42:26.925 Total : 17308.68 135.22 0.00 0.00 7388.62 3136.37 17226.61
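A quick arithmetic check on the summary row: 17308.68 IOPS at the job's 8192-byte I/O size is 17308.68 × 8192 / 2^20 ≈ 135.2 MiB/s, matching the MiB/s column, and with queue depth 128, Little's law predicts an average latency of 128 / 17308.68 ≈ 7.40 ms, consistent with the reported 7388.62 µs average.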
[... the NSID-conflict pair continues from 03:39:08.135 through 03:39:08.295 as the retry loop drains ...] 00:42:26.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2170802) - No such process 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2170802 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:26.926 delay0 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:26.926 03:39:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:42:27.185 EAL: No free 2048 kB hugepages reported on node 1 00:42:27.185 [2024-06-11 03:39:08.400872] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:42:33.753 [2024-06-11 03:39:14.504762] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19db730 is same with the state(5) to be set 00:42:33.753 [2024-06-11 03:39:14.504800] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19db730 is same with the state(5) to be set 00:42:33.753 Initializing NVMe Controllers 00:42:33.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:42:33.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:42:33.753 Initialization complete. Launching workers. 00:42:33.753 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 118 00:42:33.753 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 397, failed to submit 41 00:42:33.753 success 233, unsuccess 164, failed 0 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:33.753 rmmod nvme_tcp 00:42:33.753 rmmod nvme_fabrics 00:42:33.753 rmmod nvme_keyring 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2168821 ']' 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2168821 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 2168821 ']' 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 2168821 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2168821 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2168821' 00:42:33.753 killing process with pid 2168821 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 2168821 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 2168821 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:33.753 03:39:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:35.659 03:39:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:35.659 00:42:35.659 real 0m31.933s 00:42:35.659 user 0m42.110s 00:42:35.659 sys 0m10.808s 00:42:35.659 03:39:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:35.659 03:39:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:35.659 ************************************ 00:42:35.659 END TEST nvmf_zcopy 00:42:35.659 ************************************ 00:42:35.659 03:39:16 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:42:35.659 03:39:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:42:35.659 03:39:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:35.659 03:39:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:35.659 ************************************ 00:42:35.659 START TEST nvmf_nmic 00:42:35.659 ************************************ 00:42:35.659 03:39:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:42:35.659 * Looking for test storage... 00:42:35.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:35.659 03:39:17 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:42:35.659 03:39:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:42.302 03:39:22 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:42:42.302 Found 0000:86:00.0 (0x8086 - 0x159b) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:42:42.302 Found 0000:86:00.1 (0x8086 - 0x159b) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:42:42.302 Found net devices under 0000:86:00.0: cvl_0_0 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:42:42.302 Found net devices under 0000:86:00.1: cvl_0_1 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:42.302 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:42.303 03:39:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:42.303 03:39:23 
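Condensed, the nvmf_tcp_init sequence just traced builds the two-namespace test topology the TCP tests rely on: the target NIC cvl_0_0 is moved into the private namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, the initiator NIC cvl_0_1 stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened in iptables. Stripped of trace noise (interface and namespace names as in this run), the steps amount to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The ping exchanges that follow verify reachability in both directions before the target application is started.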
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:42.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:42.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:42:42.303 00:42:42.303 --- 10.0.0.2 ping statistics --- 00:42:42.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:42.303 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:42.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:42.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:42:42.303 00:42:42.303 --- 10.0.0.1 ping statistics --- 00:42:42.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:42.303 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2177043 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2177043 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 2177043 ']' 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:42.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 [2024-06-11 03:39:23.249338] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:42:42.303 [2024-06-11 03:39:23.249380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:42.303 EAL: No free 2048 kB hugepages reported on node 1 00:42:42.303 [2024-06-11 03:39:23.312684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:42.303 [2024-06-11 03:39:23.353921] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:42.303 [2024-06-11 03:39:23.353959] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:42.303 [2024-06-11 03:39:23.353965] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:42.303 [2024-06-11 03:39:23.353971] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:42.303 [2024-06-11 03:39:23.353976] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:42.303 [2024-06-11 03:39:23.354103] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:42:42.303 [2024-06-11 03:39:23.354275] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:42:42.303 [2024-06-11 03:39:23.354342] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:42:42.303 [2024-06-11 03:39:23.354343] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 [2024-06-11 03:39:23.504891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 Malloc0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 [2024-06-11 03:39:23.556505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:42:42.303 test case1: single bdev can't be used in multiple subsystems 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.303 [2024-06-11 03:39:23.580433] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:42:42.303 [2024-06-11 03:39:23.580453] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:42:42.303 [2024-06-11 03:39:23.580460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:42.303 request: 00:42:42.303 { 00:42:42.303 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:42:42.303 "namespace": { 00:42:42.303 "bdev_name": "Malloc0", 00:42:42.303 "no_auto_visible": false 00:42:42.303 }, 00:42:42.303 "method": "nvmf_subsystem_add_ns", 00:42:42.303 "req_id": 1 00:42:42.303 } 00:42:42.303 Got JSON-RPC error response 00:42:42.303 response: 00:42:42.303 { 00:42:42.303 "code": -32602, 00:42:42.303 "message": "Invalid parameters" 00:42:42.303 } 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:42:42.303 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:42:42.304 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:42:42.304 Adding namespace failed - expected result. 00:42:42.304 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:42:42.304 test case2: host connect to nvmf target in multiple paths 00:42:42.304 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:42:42.304 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:42:42.304 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:42.304 [2024-06-11 03:39:23.592548] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:42:42.304 03:39:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:42:42.304 03:39:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:42:43.681 03:39:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:42:44.618 03:39:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:42:44.618 03:39:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:42:44.618 03:39:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:42:44.618 03:39:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:42:44.618 03:39:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:42:46.524 03:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:42:46.524 03:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:42:46.524 03:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:42:46.524 03:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:42:46.524 03:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:42:46.524 03:39:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:42:46.524 03:39:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:42:46.524 [global] 00:42:46.524 thread=1 00:42:46.524 invalidate=1 00:42:46.524 rw=write 00:42:46.524 time_based=1 00:42:46.524 runtime=1 00:42:46.524 ioengine=libaio 00:42:46.524 direct=1 00:42:46.524 bs=4096 00:42:46.524 iodepth=1 00:42:46.524 norandommap=0 00:42:46.524 numjobs=1 00:42:46.524 00:42:46.524 verify_dump=1 00:42:46.524 verify_backlog=512 00:42:46.524 verify_state_save=0 00:42:46.524 do_verify=1 00:42:46.524 verify=crc32c-intel 00:42:46.524 [job0] 00:42:46.524 filename=/dev/nvme0n1 00:42:46.781 Could not set queue depth (nvme0n1) 00:42:47.039 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:42:47.039 fio-3.35 00:42:47.039 Starting 1 thread 00:42:48.418 00:42:48.418 job0: (groupid=0, jobs=1): err= 0: pid=2177897: Tue Jun 11 03:39:29 2024 00:42:48.418 read: IOPS=22, BW=88.8KiB/s 
(90.9kB/s)(92.0KiB/1036msec) 00:42:48.418 slat (nsec): min=9190, max=23244, avg=21457.13, stdev=3021.16 00:42:48.418 clat (usec): min=40784, max=41972, avg=41058.29, stdev=283.61 00:42:48.418 lat (usec): min=40807, max=41995, avg=41079.74, stdev=283.70 00:42:48.418 clat percentiles (usec): 00:42:48.418 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:42:48.418 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:48.418 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:42:48.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:42:48.418 | 99.99th=[42206] 00:42:48.418 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:42:48.418 slat (nsec): min=9100, max=40089, avg=10100.25, stdev=1757.92 00:42:48.418 clat (usec): min=148, max=406, avg=165.12, stdev=17.63 00:42:48.418 lat (usec): min=157, max=446, avg=175.22, stdev=18.49 00:42:48.418 clat percentiles (usec): 00:42:48.418 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 155], 20.00th=[ 157], 00:42:48.418 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 159], 60.00th=[ 161], 00:42:48.418 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 194], 00:42:48.418 | 99.00th=[ 208], 99.50th=[ 262], 99.90th=[ 408], 99.95th=[ 408], 00:42:48.418 | 99.99th=[ 408] 00:42:48.418 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:42:48.418 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:42:48.418 lat (usec) : 250=95.14%, 500=0.56% 00:42:48.418 lat (msec) : 50=4.30% 00:42:48.418 cpu : usr=0.19%, sys=0.58%, ctx=535, majf=0, minf=2 00:42:48.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:48.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:48.418 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:48.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:42:48.418 00:42:48.418 Run status group 0 (all jobs): 00:42:48.418 READ: bw=88.8KiB/s (90.9kB/s), 88.8KiB/s-88.8KiB/s (90.9kB/s-90.9kB/s), io=92.0KiB (94.2kB), run=1036-1036msec 00:42:48.418 WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec 00:42:48.418 00:42:48.418 Disk stats (read/write): 00:42:48.418 nvme0n1: ios=69/512, merge=0/0, ticks=796/80, in_queue=876, util=91.28% 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:42:48.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- 
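Test case2 above connected the same host to nqn.2016-06.io.spdk:cnode1 twice, once per listener (ports 4420 and 4421), so the subsystem ends up with two controllers over two paths; that is why the single nvme disconnect by NQN reports "disconnected 2 controller(s)". As a sketch, on a live initiator the two paths could be inspected before disconnecting (the device name /dev/nvme0n1 is an assumption taken from the fio job above):

  # lists the subsystem and the TCP paths behind the block device
  nvme list-subsys /dev/nvme0n1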
target/nmic.sh@53 -- # nvmftestfini 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:42:48.418 rmmod nvme_tcp 00:42:48.418 rmmod nvme_fabrics 00:42:48.418 rmmod nvme_keyring 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2177043 ']' 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2177043 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 2177043 ']' 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 2177043 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2177043 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2177043' 00:42:48.418 killing process with pid 2177043 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 2177043 00:42:48.418 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 2177043 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:48.678 03:39:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:50.584 03:39:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:42:50.584 00:42:50.584 real 0m15.024s 00:42:50.584 user 0m33.306s 00:42:50.584 sys 0m5.300s 00:42:50.584 03:39:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:42:50.584 03:39:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:50.584 ************************************ 00:42:50.584 END TEST nvmf_nmic 00:42:50.584 ************************************ 00:42:50.584 03:39:31 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:42:50.584 03:39:31 
nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:42:50.584 03:39:31 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:42:50.584 03:39:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:50.845 ************************************ 00:42:50.845 START TEST nvmf_fio_target 00:42:50.845 ************************************ 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:42:50.845 * Looking for test storage... 00:42:50.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:42:50.845 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:42:50.846 03:39:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:57.423 03:39:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:42:57.423 Found 0000:86:00.0 (0x8086 - 0x159b) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:42:57.423 Found 0000:86:00.1 (0x8086 - 0x159b) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:57.423 03:39:37 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:42:57.423 Found net devices under 0000:86:00.0: cvl_0_0 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:42:57.423 Found net devices under 0000:86:00.1: cvl_0_1 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:42:57.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:57.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:42:57.423 00:42:57.423 --- 10.0.0.2 ping statistics --- 00:42:57.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.423 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:42:57.423 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:57.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:57.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:42:57.423 00:42:57.423 --- 10.0.0.1 ping statistics --- 00:42:57.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:57.424 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2181938 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2181938 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 2181938 ']' 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:57.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
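For readability, the namespace plumbing traced above boils down to the following condensed sketch of what this run's nvmf_tcp_init performed (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this machine's two E810 ports and will differ on other hardware):

  # Move one NIC port into its own network namespace so target and
  # initiator can exchange real TCP traffic on a single host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit inbound NVMe/TCP
  ping -c 1 10.0.0.2                                                 # sanity-check host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # and namespace -> host

With one port isolated in cvl_0_0_ns_spdk, the SPDK target is launched under ip netns exec (as the nvmf_tgt invocation below shows) while the host-side initiator connects to 10.0.0.2:4420, so the NVMe/TCP path exercises the physical NIC rather than loopback.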
00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:42:57.424 03:39:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.424 [2024-06-11 03:39:37.961522] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:42:57.424 [2024-06-11 03:39:37.961568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:57.424 EAL: No free 2048 kB hugepages reported on node 1 00:42:57.424 [2024-06-11 03:39:38.023952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:57.424 [2024-06-11 03:39:38.065452] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:57.424 [2024-06-11 03:39:38.065491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:57.424 [2024-06-11 03:39:38.065498] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:57.424 [2024-06-11 03:39:38.065504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:57.424 [2024-06-11 03:39:38.065509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:57.424 [2024-06-11 03:39:38.065561] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:42:57.424 [2024-06-11 03:39:38.065779] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:42:57.424 [2024-06-11 03:39:38.065848] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:42:57.424 [2024-06-11 03:39:38.065849] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:42:57.424 03:39:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:42:57.424 03:39:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:42:57.424 03:39:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:42:57.424 03:39:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:42:57.424 03:39:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.424 03:39:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:57.424 03:39:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:42:57.683 [2024-06-11 03:39:38.963627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:57.683 03:39:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:57.942 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:42:57.942 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:58.202 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:42:58.202 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:58.202 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:42:58.202 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:58.461 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:42:58.461 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:42:58.720 03:39:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:58.979 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:42:58.979 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:58.979 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:42:58.979 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:42:59.238 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:42:59.238 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:42:59.496 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:59.755 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:59.755 03:39:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:59.755 03:39:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:42:59.755 03:39:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:00.013 03:39:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:00.013 [2024-06-11 03:39:41.413078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:00.271 03:39:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:00.271 03:39:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:00.529 03:39:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:01.908 03:39:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:43:01.908 03:39:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:43:01.908 03:39:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:43:01.908 03:39:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:43:01.908 03:39:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:43:01.908 03:39:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:43:03.813 03:39:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:43:03.813 03:39:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:43:03.813 03:39:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:43:03.813 03:39:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:43:03.813 03:39:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:43:03.813 03:39:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:43:03.813 03:39:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:03.813 [global] 00:43:03.813 thread=1 00:43:03.813 invalidate=1 00:43:03.813 rw=write 00:43:03.813 time_based=1 00:43:03.813 runtime=1 00:43:03.813 ioengine=libaio 00:43:03.813 direct=1 00:43:03.813 bs=4096 00:43:03.813 iodepth=1 00:43:03.813 norandommap=0 00:43:03.813 numjobs=1 00:43:03.813 00:43:03.813 verify_dump=1 00:43:03.813 verify_backlog=512 00:43:03.813 verify_state_save=0 00:43:03.813 do_verify=1 00:43:03.813 verify=crc32c-intel 00:43:03.813 [job0] 00:43:03.813 filename=/dev/nvme0n1 00:43:03.813 [job1] 00:43:03.813 filename=/dev/nvme0n2 00:43:03.813 [job2] 00:43:03.813 filename=/dev/nvme0n3 00:43:03.813 [job3] 00:43:03.813 filename=/dev/nvme0n4 00:43:03.813 Could not set queue depth (nvme0n1) 00:43:03.813 Could not set queue depth (nvme0n2) 00:43:03.813 Could not set queue depth (nvme0n3) 00:43:03.813 Could not set queue depth (nvme0n4) 00:43:04.072 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:04.072 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:04.072 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:04.072 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:04.072 fio-3.35 00:43:04.072 Starting 4 threads 00:43:05.484 00:43:05.484 job0: (groupid=0, jobs=1): err= 0: pid=2183285: Tue Jun 11 03:39:46 2024 00:43:05.484 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:43:05.484 slat (nsec): min=10258, max=24941, avg=22834.82, stdev=2926.38 00:43:05.484 clat (usec): min=40840, max=41962, avg=41080.08, stdev=289.22 00:43:05.484 lat (usec): min=40863, max=41986, avg=41102.92, stdev=288.84 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:43:05.484 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:05.484 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:43:05.484 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:05.484 
| 99.99th=[42206] 00:43:05.484 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:43:05.484 slat (nsec): min=10750, max=41656, avg=12108.67, stdev=2141.87 00:43:05.484 clat (usec): min=152, max=443, avg=191.45, stdev=23.89 00:43:05.484 lat (usec): min=165, max=455, avg=203.56, stdev=24.35 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:43:05.484 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:43:05.484 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 215], 00:43:05.484 | 99.00th=[ 302], 99.50th=[ 351], 99.90th=[ 445], 99.95th=[ 445], 00:43:05.484 | 99.99th=[ 445] 00:43:05.484 bw ( KiB/s): min= 4096, max= 4096, per=20.34%, avg=4096.00, stdev= 0.00, samples=1 00:43:05.484 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:05.484 lat (usec) : 250=94.57%, 500=1.31% 00:43:05.484 lat (msec) : 50=4.12% 00:43:05.484 cpu : usr=0.59%, sys=0.79%, ctx=535, majf=0, minf=1 00:43:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:05.484 job1: (groupid=0, jobs=1): err= 0: pid=2183286: Tue Jun 11 03:39:46 2024 00:43:05.484 read: IOPS=20, BW=82.6KiB/s (84.6kB/s)(84.0KiB/1017msec) 00:43:05.484 slat (nsec): min=9922, max=23627, avg=22427.57, stdev=2879.86 00:43:05.484 clat (usec): min=40741, max=41998, avg=41188.30, stdev=408.65 00:43:05.484 lat (usec): min=40765, max=42021, avg=41210.73, stdev=408.01 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:05.484 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:05.484 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:43:05.484 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:05.484 | 99.99th=[42206] 00:43:05.484 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:43:05.484 slat (usec): min=9, max=36649, avg=83.67, stdev=1619.17 00:43:05.484 clat (usec): min=149, max=366, avg=208.06, stdev=34.10 00:43:05.484 lat (usec): min=159, max=36908, avg=291.73, stdev=1621.82 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:43:05.484 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 219], 00:43:05.484 | 70.00th=[ 235], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 00:43:05.484 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[ 367], 99.95th=[ 367], 00:43:05.484 | 99.99th=[ 367] 00:43:05.484 bw ( KiB/s): min= 4096, max= 4096, per=20.34%, avg=4096.00, stdev= 0.00, samples=1 00:43:05.484 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:05.484 lat (usec) : 250=87.05%, 500=9.01% 00:43:05.484 lat (msec) : 50=3.94% 00:43:05.484 cpu : usr=0.30%, sys=0.79%, ctx=537, majf=0, minf=1 00:43:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.484 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:43:05.484 job2: (groupid=0, jobs=1): err= 0: pid=2183287: Tue Jun 11 03:39:46 2024 00:43:05.484 read: IOPS=1883, BW=7532KiB/s (7713kB/s)(7540KiB/1001msec) 00:43:05.484 slat (nsec): min=6511, max=29235, avg=7571.70, stdev=1077.95 00:43:05.484 clat (usec): min=235, max=573, avg=294.72, stdev=42.68 00:43:05.484 lat (usec): min=242, max=585, avg=302.29, stdev=42.83 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:43:05.484 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:43:05.484 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 326], 95.00th=[ 371], 00:43:05.484 | 99.00th=[ 474], 99.50th=[ 478], 99.90th=[ 545], 99.95th=[ 578], 00:43:05.484 | 99.99th=[ 578] 00:43:05.484 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:05.484 slat (nsec): min=9636, max=38263, avg=10831.76, stdev=1372.44 00:43:05.484 clat (usec): min=153, max=684, avg=194.02, stdev=23.08 00:43:05.484 lat (usec): min=164, max=697, avg=204.85, stdev=23.28 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:43:05.484 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 194], 00:43:05.484 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 239], 00:43:05.484 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 343], 99.95th=[ 482], 00:43:05.484 | 99.99th=[ 685] 00:43:05.484 bw ( KiB/s): min= 8192, max= 8192, per=40.68%, avg=8192.00, stdev= 0.00, samples=1 00:43:05.484 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:05.484 lat (usec) : 250=52.99%, 500=46.86%, 750=0.15% 00:43:05.484 cpu : usr=2.40%, sys=3.50%, ctx=3935, majf=0, minf=1 00:43:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 issued rwts: total=1885,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:05.484 job3: (groupid=0, jobs=1): err= 0: pid=2183288: Tue Jun 11 03:39:46 2024 00:43:05.484 read: IOPS=1691, BW=6765KiB/s (6928kB/s)(6772KiB/1001msec) 00:43:05.484 slat (nsec): min=7189, max=40378, avg=8205.12, stdev=1347.66 00:43:05.484 clat (usec): min=280, max=976, avg=325.23, stdev=24.91 00:43:05.484 lat (usec): min=288, max=985, avg=333.43, stdev=24.96 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:43:05.484 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:43:05.484 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 347], 95.00th=[ 363], 00:43:05.484 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 449], 99.95th=[ 979], 00:43:05.484 | 99.99th=[ 979] 00:43:05.484 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:05.484 slat (nsec): min=10486, max=37842, avg=12158.68, stdev=1808.27 00:43:05.484 clat (usec): min=159, max=376, avg=194.15, stdev=16.54 00:43:05.484 lat (usec): min=170, max=389, avg=206.31, stdev=17.04 00:43:05.484 clat percentiles (usec): 00:43:05.484 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:43:05.484 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:43:05.484 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 225], 00:43:05.484 | 99.00th=[ 249], 99.50th=[ 253], 
99.90th=[ 314], 99.95th=[ 334], 00:43:05.484 | 99.99th=[ 379] 00:43:05.484 bw ( KiB/s): min= 8192, max= 8192, per=40.68%, avg=8192.00, stdev= 0.00, samples=1 00:43:05.484 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:43:05.484 lat (usec) : 250=54.29%, 500=45.68%, 1000=0.03% 00:43:05.484 cpu : usr=4.00%, sys=5.30%, ctx=3742, majf=0, minf=2 00:43:05.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.484 issued rwts: total=1693,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:05.484 00:43:05.484 Run status group 0 (all jobs): 00:43:05.484 READ: bw=13.9MiB/s (14.6MB/s), 82.6KiB/s-7532KiB/s (84.6kB/s-7713kB/s), io=14.1MiB (14.8MB), run=1001-1017msec 00:43:05.484 WRITE: bw=19.7MiB/s (20.6MB/s), 2014KiB/s-8184KiB/s (2062kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1017msec 00:43:05.484 00:43:05.484 Disk stats (read/write): 00:43:05.484 nvme0n1: ios=69/512, merge=0/0, ticks=820/89, in_queue=909, util=87.17% 00:43:05.484 nvme0n2: ios=68/512, merge=0/0, ticks=925/104, in_queue=1029, util=91.16% 00:43:05.484 nvme0n3: ios=1558/1882, merge=0/0, ticks=1351/348, in_queue=1699, util=93.45% 00:43:05.484 nvme0n4: ios=1585/1599, merge=0/0, ticks=550/303, in_queue=853, util=95.08% 00:43:05.484 03:39:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:05.484 [global] 00:43:05.484 thread=1 00:43:05.484 invalidate=1 00:43:05.484 rw=randwrite 00:43:05.485 time_based=1 00:43:05.485 runtime=1 00:43:05.485 ioengine=libaio 00:43:05.485 direct=1 00:43:05.485 bs=4096 00:43:05.485 iodepth=1 00:43:05.485 norandommap=0 00:43:05.485 numjobs=1 00:43:05.485 00:43:05.485 verify_dump=1 00:43:05.485 verify_backlog=512 00:43:05.485 verify_state_save=0 00:43:05.485 do_verify=1 00:43:05.485 verify=crc32c-intel 00:43:05.485 [job0] 00:43:05.485 filename=/dev/nvme0n1 00:43:05.485 [job1] 00:43:05.485 filename=/dev/nvme0n2 00:43:05.485 [job2] 00:43:05.485 filename=/dev/nvme0n3 00:43:05.485 [job3] 00:43:05.485 filename=/dev/nvme0n4 00:43:05.485 Could not set queue depth (nvme0n1) 00:43:05.485 Could not set queue depth (nvme0n2) 00:43:05.485 Could not set queue depth (nvme0n3) 00:43:05.485 Could not set queue depth (nvme0n4) 00:43:05.783 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.783 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.783 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.783 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:05.783 fio-3.35 00:43:05.783 Starting 4 threads 00:43:06.726 00:43:06.726 job0: (groupid=0, jobs=1): err= 0: pid=2183663: Tue Jun 11 03:39:48 2024 00:43:06.726 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:43:06.726 slat (nsec): min=9086, max=22976, avg=22100.86, stdev=2912.94 00:43:06.726 clat (usec): min=40593, max=41697, avg=40982.99, stdev=188.55 00:43:06.726 lat (usec): min=40602, max=41720, avg=41005.09, stdev=189.88 00:43:06.726 clat percentiles (usec): 00:43:06.726 | 1.00th=[40633], 5.00th=[40633], 
10.00th=[40633], 20.00th=[41157], 00:43:06.726 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:06.726 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:06.726 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:06.726 | 99.99th=[41681] 00:43:06.726 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:43:06.726 slat (nsec): min=8805, max=65577, avg=10046.40, stdev=2658.77 00:43:06.726 clat (usec): min=152, max=377, avg=213.05, stdev=26.06 00:43:06.726 lat (usec): min=161, max=442, avg=223.10, stdev=26.86 00:43:06.726 clat percentiles (usec): 00:43:06.726 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:43:06.726 | 30.00th=[ 200], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:43:06.726 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 258], 00:43:06.726 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 379], 99.95th=[ 379], 00:43:06.726 | 99.99th=[ 379] 00:43:06.726 bw ( KiB/s): min= 4087, max= 4087, per=29.05%, avg=4087.00, stdev= 0.00, samples=1 00:43:06.727 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:43:06.727 lat (usec) : 250=89.33%, 500=6.55% 00:43:06.727 lat (msec) : 50=4.12% 00:43:06.727 cpu : usr=0.39%, sys=0.39%, ctx=535, majf=0, minf=1 00:43:06.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.727 job1: (groupid=0, jobs=1): err= 0: pid=2183668: Tue Jun 11 03:39:48 2024 00:43:06.727 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:43:06.727 slat (nsec): min=10642, max=26396, avg=22114.82, stdev=2763.45 00:43:06.727 clat (usec): min=40833, max=41083, avg=40974.51, stdev=56.41 00:43:06.727 lat (usec): min=40860, max=41105, avg=40996.62, stdev=55.51 00:43:06.727 clat percentiles (usec): 00:43:06.727 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:06.727 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:06.727 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:06.727 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:06.727 | 99.99th=[41157] 00:43:06.727 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:43:06.727 slat (nsec): min=11361, max=36161, avg=12994.49, stdev=2260.17 00:43:06.727 clat (usec): min=163, max=310, avg=203.37, stdev=19.15 00:43:06.727 lat (usec): min=175, max=347, avg=216.37, stdev=19.72 00:43:06.727 clat percentiles (usec): 00:43:06.727 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:43:06.727 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:43:06.727 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 237], 00:43:06.727 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 310], 99.95th=[ 310], 00:43:06.727 | 99.99th=[ 310] 00:43:06.727 bw ( KiB/s): min= 4087, max= 4087, per=29.05%, avg=4087.00, stdev= 0.00, samples=1 00:43:06.727 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:43:06.727 lat (usec) : 250=94.01%, 500=1.87% 00:43:06.727 lat (msec) : 50=4.12% 00:43:06.727 cpu : usr=0.30%, sys=1.18%, ctx=536, majf=0, minf=2 00:43:06.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.727 job2: (groupid=0, jobs=1): err= 0: pid=2183677: Tue Jun 11 03:39:48 2024 00:43:06.727 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:43:06.727 slat (nsec): min=10077, max=24612, avg=20360.55, stdev=4164.51 00:43:06.727 clat (usec): min=40851, max=41045, avg=40961.78, stdev=59.81 00:43:06.727 lat (usec): min=40862, max=41068, avg=40982.14, stdev=61.19 00:43:06.727 clat percentiles (usec): 00:43:06.727 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:43:06.727 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:06.727 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:06.727 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:43:06.727 | 99.99th=[41157] 00:43:06.727 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:43:06.727 slat (nsec): min=10208, max=37235, avg=11652.81, stdev=1747.67 00:43:06.727 clat (usec): min=170, max=325, avg=213.78, stdev=26.20 00:43:06.727 lat (usec): min=183, max=350, avg=225.44, stdev=26.66 00:43:06.727 clat percentiles (usec): 00:43:06.727 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:43:06.727 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:43:06.727 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 269], 00:43:06.727 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 326], 00:43:06.727 | 99.99th=[ 326] 00:43:06.727 bw ( KiB/s): min= 4096, max= 4096, per=29.11%, avg=4096.00, stdev= 0.00, samples=1 00:43:06.727 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:06.727 lat (usec) : 250=88.20%, 500=7.68% 00:43:06.727 lat (msec) : 50=4.12% 00:43:06.727 cpu : usr=0.29%, sys=1.08%, ctx=534, majf=0, minf=1 00:43:06.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.727 job3: (groupid=0, jobs=1): err= 0: pid=2183681: Tue Jun 11 03:39:48 2024 00:43:06.727 read: IOPS=2039, BW=8160KiB/s (8356kB/s)(8168KiB/1001msec) 00:43:06.727 slat (nsec): min=6472, max=41844, avg=8708.42, stdev=1755.26 00:43:06.727 clat (usec): min=222, max=565, avg=269.88, stdev=46.53 00:43:06.727 lat (usec): min=230, max=574, avg=278.59, stdev=46.79 00:43:06.727 clat percentiles (usec): 00:43:06.727 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:43:06.727 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:43:06.727 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 433], 00:43:06.727 | 99.00th=[ 465], 99.50th=[ 482], 99.90th=[ 537], 99.95th=[ 553], 00:43:06.727 | 99.99th=[ 570] 00:43:06.727 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:43:06.727 slat (nsec): min=9559, max=42097, avg=12732.43, stdev=2353.39 00:43:06.727 clat (usec): min=153, max=419, avg=191.18, stdev=23.24 00:43:06.727 lat (usec): min=163, 
max=452, avg=203.91, stdev=23.87 00:43:06.727 clat percentiles (usec): 00:43:06.727 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:43:06.727 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 190], 00:43:06.727 | 70.00th=[ 196], 80.00th=[ 208], 90.00th=[ 225], 95.00th=[ 237], 00:43:06.727 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 326], 99.95th=[ 359], 00:43:06.727 | 99.99th=[ 420] 00:43:06.727 bw ( KiB/s): min= 8175, max= 8175, per=58.11%, avg=8175.00, stdev= 0.00, samples=1 00:43:06.727 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:43:06.727 lat (usec) : 250=60.86%, 500=39.02%, 750=0.12% 00:43:06.727 cpu : usr=3.50%, sys=6.30%, ctx=4093, majf=0, minf=1 00:43:06.727 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:06.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:06.727 issued rwts: total=2042,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:06.727 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:06.727 00:43:06.727 Run status group 0 (all jobs): 00:43:06.727 READ: bw=8275KiB/s (8473kB/s), 86.4KiB/s-8160KiB/s (88.4kB/s-8356kB/s), io=8432KiB (8634kB), run=1001-1019msec 00:43:06.727 WRITE: bw=13.7MiB/s (14.4MB/s), 2010KiB/s-8184KiB/s (2058kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1019msec 00:43:06.727 00:43:06.727 Disk stats (read/write): 00:43:06.727 nvme0n1: ios=68/512, merge=0/0, ticks=935/110, in_queue=1045, util=91.38% 00:43:06.727 nvme0n2: ios=55/512, merge=0/0, ticks=1688/94, in_queue=1782, util=96.45% 00:43:06.727 nvme0n3: ios=17/512, merge=0/0, ticks=697/101, in_queue=798, util=89.07% 00:43:06.727 nvme0n4: ios=1578/1976, merge=0/0, ticks=1398/342, in_queue=1740, util=98.64% 00:43:06.985 03:39:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:06.985 [global] 00:43:06.985 thread=1 00:43:06.985 invalidate=1 00:43:06.985 rw=write 00:43:06.985 time_based=1 00:43:06.985 runtime=1 00:43:06.985 ioengine=libaio 00:43:06.985 direct=1 00:43:06.985 bs=4096 00:43:06.985 iodepth=128 00:43:06.985 norandommap=0 00:43:06.985 numjobs=1 00:43:06.985 00:43:06.985 verify_dump=1 00:43:06.985 verify_backlog=512 00:43:06.985 verify_state_save=0 00:43:06.985 do_verify=1 00:43:06.985 verify=crc32c-intel 00:43:06.985 [job0] 00:43:06.985 filename=/dev/nvme0n1 00:43:06.985 [job1] 00:43:06.985 filename=/dev/nvme0n2 00:43:06.985 [job2] 00:43:06.985 filename=/dev/nvme0n3 00:43:06.985 [job3] 00:43:06.985 filename=/dev/nvme0n4 00:43:06.985 Could not set queue depth (nvme0n1) 00:43:06.985 Could not set queue depth (nvme0n2) 00:43:06.985 Could not set queue depth (nvme0n3) 00:43:06.985 Could not set queue depth (nvme0n4) 00:43:07.243 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:07.243 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:07.243 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:07.243 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:07.243 fio-3.35 00:43:07.243 Starting 4 threads 00:43:08.639 00:43:08.639 job0: (groupid=0, jobs=1): err= 0: pid=2184082: Tue Jun 11 03:39:49 2024 00:43:08.639 read: IOPS=3156, BW=12.3MiB/s 
(12.9MB/s)(13.0MiB/1051msec) 00:43:08.639 slat (nsec): min=1657, max=16068k, avg=134053.56, stdev=952236.01 00:43:08.639 clat (usec): min=8361, max=62743, avg=17492.06, stdev=9456.63 00:43:08.639 lat (usec): min=8367, max=62745, avg=17626.12, stdev=9512.00 00:43:08.639 clat percentiles (usec): 00:43:08.639 | 1.00th=[ 9241], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[11076], 00:43:08.639 | 30.00th=[11469], 40.00th=[12387], 50.00th=[15008], 60.00th=[17957], 00:43:08.639 | 70.00th=[19268], 80.00th=[20055], 90.00th=[27132], 95.00th=[39060], 00:43:08.639 | 99.00th=[52691], 99.50th=[52691], 99.90th=[62653], 99.95th=[62653], 00:43:08.639 | 99.99th=[62653] 00:43:08.639 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1051msec); 0 zone resets 00:43:08.639 slat (usec): min=2, max=14473, avg=149.09, stdev=886.90 00:43:08.639 clat (usec): min=1583, max=51102, avg=20994.14, stdev=10846.56 00:43:08.639 lat (usec): min=1612, max=51109, avg=21143.23, stdev=10930.15 00:43:08.639 clat percentiles (usec): 00:43:08.639 | 1.00th=[ 6194], 5.00th=[ 7242], 10.00th=[ 9241], 20.00th=[12780], 00:43:08.639 | 30.00th=[15008], 40.00th=[17695], 50.00th=[19268], 60.00th=[19792], 00:43:08.639 | 70.00th=[20841], 80.00th=[26870], 90.00th=[42206], 95.00th=[45876], 00:43:08.639 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50594], 99.95th=[51119], 00:43:08.639 | 99.99th=[51119] 00:43:08.639 bw ( KiB/s): min=12288, max=16384, per=21.15%, avg=14336.00, stdev=2896.31, samples=2 00:43:08.639 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:43:08.639 lat (msec) : 2=0.13%, 10=13.62%, 20=58.50%, 50=25.60%, 100=2.14% 00:43:08.639 cpu : usr=4.10%, sys=4.10%, ctx=307, majf=0, minf=1 00:43:08.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:43:08.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:08.639 issued rwts: total=3318,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:08.639 job1: (groupid=0, jobs=1): err= 0: pid=2184094: Tue Jun 11 03:39:49 2024 00:43:08.639 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:43:08.639 slat (nsec): min=1859, max=29688k, avg=138578.04, stdev=1125104.44 00:43:08.639 clat (usec): min=6715, max=89876, avg=16359.83, stdev=13308.25 00:43:08.639 lat (usec): min=6723, max=89905, avg=16498.41, stdev=13441.21 00:43:08.639 clat percentiles (usec): 00:43:08.639 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[ 9896], 00:43:08.639 | 30.00th=[10290], 40.00th=[11076], 50.00th=[12125], 60.00th=[12256], 00:43:08.639 | 70.00th=[13042], 80.00th=[14877], 90.00th=[33817], 95.00th=[44827], 00:43:08.639 | 99.00th=[71828], 99.50th=[84411], 99.90th=[88605], 99.95th=[88605], 00:43:08.639 | 99.99th=[89654] 00:43:08.639 write: IOPS=3983, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1012msec); 0 zone resets 00:43:08.639 slat (usec): min=2, max=28354, avg=119.01, stdev=880.98 00:43:08.639 clat (usec): min=4424, max=88899, avg=17099.77, stdev=11648.14 00:43:08.639 lat (usec): min=5505, max=88911, avg=17218.78, stdev=11714.27 00:43:08.639 clat percentiles (usec): 00:43:08.639 | 1.00th=[ 7111], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10028], 00:43:08.639 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11863], 60.00th=[12649], 00:43:08.639 | 70.00th=[18482], 80.00th=[19792], 90.00th=[34866], 95.00th=[43254], 00:43:08.639 | 99.00th=[56886], 99.50th=[58983], 99.90th=[72877], 99.95th=[72877], 
00:43:08.639 | 99.99th=[88605] 00:43:08.639 bw ( KiB/s): min=14840, max=16384, per=23.03%, avg=15612.00, stdev=1091.77, samples=2 00:43:08.639 iops : min= 3710, max= 4096, avg=3903.00, stdev=272.94, samples=2 00:43:08.639 lat (msec) : 10=22.06%, 20=60.76%, 50=12.97%, 100=4.20% 00:43:08.639 cpu : usr=3.76%, sys=5.24%, ctx=291, majf=0, minf=1 00:43:08.639 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:43:08.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.639 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:08.639 issued rwts: total=3584,4031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.639 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:08.639 job2: (groupid=0, jobs=1): err= 0: pid=2184110: Tue Jun 11 03:39:49 2024 00:43:08.639 read: IOPS=5519, BW=21.6MiB/s (22.6MB/s)(21.8MiB/1010msec) 00:43:08.640 slat (nsec): min=1320, max=16286k, avg=100502.44, stdev=771585.61 00:43:08.640 clat (usec): min=3205, max=33223, avg=12331.69, stdev=4003.45 00:43:08.640 lat (usec): min=3212, max=33241, avg=12432.19, stdev=4061.06 00:43:08.640 clat percentiles (usec): 00:43:08.640 | 1.00th=[ 4686], 5.00th=[ 7767], 10.00th=[ 8717], 20.00th=[ 9372], 00:43:08.640 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:43:08.640 | 70.00th=[12518], 80.00th=[15401], 90.00th=[17171], 95.00th=[19268], 00:43:08.640 | 99.00th=[28443], 99.50th=[30540], 99.90th=[32375], 99.95th=[32375], 00:43:08.640 | 99.99th=[33162] 00:43:08.640 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:43:08.640 slat (usec): min=2, max=9342, avg=73.37, stdev=374.80 00:43:08.640 clat (usec): min=1593, max=31794, avg=10528.25, stdev=3133.47 00:43:08.640 lat (usec): min=1604, max=31797, avg=10601.62, stdev=3161.03 00:43:08.640 clat percentiles (usec): 00:43:08.640 | 1.00th=[ 3261], 5.00th=[ 5014], 10.00th=[ 7111], 20.00th=[ 8979], 00:43:08.640 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[11338], 00:43:08.640 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[15139], 00:43:08.640 | 99.00th=[23987], 99.50th=[24511], 99.90th=[25297], 99.95th=[25560], 00:43:08.640 | 99.99th=[31851] 00:43:08.640 bw ( KiB/s): min=20480, max=24576, per=33.23%, avg=22528.00, stdev=2896.31, samples=2 00:43:08.640 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:43:08.640 lat (msec) : 2=0.03%, 4=1.49%, 10=32.19%, 20=63.09%, 50=3.19% 00:43:08.640 cpu : usr=4.26%, sys=6.05%, ctx=695, majf=0, minf=1 00:43:08.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:43:08.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:08.640 issued rwts: total=5575,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:08.640 job3: (groupid=0, jobs=1): err= 0: pid=2184115: Tue Jun 11 03:39:49 2024 00:43:08.640 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:43:08.640 slat (nsec): min=1131, max=16626k, avg=119556.76, stdev=988042.90 00:43:08.640 clat (usec): min=1576, max=46219, avg=16218.11, stdev=7087.48 00:43:08.640 lat (usec): min=3807, max=46226, avg=16337.67, stdev=7142.01 00:43:08.640 clat percentiles (usec): 00:43:08.640 | 1.00th=[ 4883], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10945], 00:43:08.640 | 30.00th=[11207], 40.00th=[11600], 50.00th=[13042], 60.00th=[17695], 
00:43:08.640 | 70.00th=[19268], 80.00th=[20055], 90.00th=[29230], 95.00th=[29754], 00:43:08.640 | 99.00th=[40109], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:43:08.640 | 99.99th=[46400] 00:43:08.640 write: IOPS=4516, BW=17.6MiB/s (18.5MB/s)(17.8MiB/1011msec); 0 zone resets 00:43:08.640 slat (nsec): min=1884, max=21126k, avg=86885.52, stdev=782063.25 00:43:08.640 clat (usec): min=890, max=42879, avg=12850.56, stdev=5485.87 00:43:08.640 lat (usec): min=898, max=42921, avg=12937.44, stdev=5548.14 00:43:08.640 clat percentiles (usec): 00:43:08.640 | 1.00th=[ 2835], 5.00th=[ 4817], 10.00th=[ 6194], 20.00th=[ 8979], 00:43:08.640 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:43:08.640 | 70.00th=[14746], 80.00th=[17957], 90.00th=[19268], 95.00th=[22414], 00:43:08.640 | 99.00th=[30540], 99.50th=[34341], 99.90th=[40633], 99.95th=[40633], 00:43:08.640 | 99.99th=[42730] 00:43:08.640 bw ( KiB/s): min=12288, max=23224, per=26.19%, avg=17756.00, stdev=7732.92, samples=2 00:43:08.640 iops : min= 3072, max= 5806, avg=4439.00, stdev=1933.23, samples=2 00:43:08.640 lat (usec) : 1000=0.05% 00:43:08.640 lat (msec) : 2=0.17%, 4=1.87%, 10=13.37%, 20=69.91%, 50=14.63% 00:43:08.640 cpu : usr=2.38%, sys=5.05%, ctx=461, majf=0, minf=1 00:43:08.640 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:08.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:08.640 issued rwts: total=4096,4566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.640 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:08.640 00:43:08.640 Run status group 0 (all jobs): 00:43:08.640 READ: bw=61.6MiB/s (64.6MB/s), 12.3MiB/s-21.6MiB/s (12.9MB/s-22.6MB/s), io=64.7MiB (67.9MB), run=1010-1051msec 00:43:08.640 WRITE: bw=66.2MiB/s (69.4MB/s), 13.3MiB/s-21.8MiB/s (14.0MB/s-22.8MB/s), io=69.6MiB (73.0MB), run=1010-1051msec 00:43:08.640 00:43:08.640 Disk stats (read/write): 00:43:08.640 nvme0n1: ios=2735/3072, merge=0/0, ticks=43305/61242, in_queue=104547, util=86.97% 00:43:08.640 nvme0n2: ios=3122/3543, merge=0/0, ticks=26400/24742, in_queue=51142, util=98.17% 00:43:08.640 nvme0n3: ios=4608/4887, merge=0/0, ticks=54858/50332, in_queue=105190, util=88.98% 00:43:08.640 nvme0n4: ios=3338/3584, merge=0/0, ticks=53447/44321, in_queue=97768, util=98.12% 00:43:08.640 03:39:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:08.640 [global] 00:43:08.640 thread=1 00:43:08.640 invalidate=1 00:43:08.640 rw=randwrite 00:43:08.640 time_based=1 00:43:08.640 runtime=1 00:43:08.640 ioengine=libaio 00:43:08.640 direct=1 00:43:08.640 bs=4096 00:43:08.640 iodepth=128 00:43:08.640 norandommap=0 00:43:08.640 numjobs=1 00:43:08.640 00:43:08.640 verify_dump=1 00:43:08.640 verify_backlog=512 00:43:08.640 verify_state_save=0 00:43:08.640 do_verify=1 00:43:08.640 verify=crc32c-intel 00:43:08.640 [job0] 00:43:08.640 filename=/dev/nvme0n1 00:43:08.640 [job1] 00:43:08.640 filename=/dev/nvme0n2 00:43:08.640 [job2] 00:43:08.640 filename=/dev/nvme0n3 00:43:08.640 [job3] 00:43:08.640 filename=/dev/nvme0n4 00:43:08.640 Could not set queue depth (nvme0n1) 00:43:08.640 Could not set queue depth (nvme0n2) 00:43:08.640 Could not set queue depth (nvme0n3) 00:43:08.640 Could not set queue depth (nvme0n4) 00:43:08.901 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.901 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.901 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.901 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:08.901 fio-3.35 00:43:08.901 Starting 4 threads 00:43:10.271 00:43:10.271 job0: (groupid=0, jobs=1): err= 0: pid=2184535: Tue Jun 11 03:39:51 2024 00:43:10.271 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:43:10.271 slat (nsec): min=1253, max=11458k, avg=124625.07, stdev=821059.42 00:43:10.271 clat (usec): min=5921, max=42886, avg=14349.86, stdev=5520.86 00:43:10.271 lat (usec): min=5928, max=42895, avg=14474.48, stdev=5593.82 00:43:10.271 clat percentiles (usec): 00:43:10.271 | 1.00th=[ 6783], 5.00th=[11207], 10.00th=[11469], 20.00th=[11600], 00:43:10.271 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12387], 60.00th=[12518], 00:43:10.271 | 70.00th=[13173], 80.00th=[15270], 90.00th=[21890], 95.00th=[27395], 00:43:10.271 | 99.00th=[36439], 99.50th=[40633], 99.90th=[42730], 99.95th=[42730], 00:43:10.271 | 99.99th=[42730] 00:43:10.271 write: IOPS=3671, BW=14.3MiB/s (15.0MB/s)(14.5MiB/1010msec); 0 zone resets 00:43:10.271 slat (usec): min=2, max=12014, avg=144.26, stdev=696.19 00:43:10.271 clat (usec): min=727, max=44594, avg=20664.03, stdev=9975.92 00:43:10.271 lat (usec): min=1584, max=44600, avg=20808.30, stdev=10045.67 00:43:10.271 clat percentiles (usec): 00:43:10.271 | 1.00th=[ 4178], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10683], 00:43:10.271 | 30.00th=[12387], 40.00th=[16581], 50.00th=[20579], 60.00th=[21627], 00:43:10.272 | 70.00th=[23987], 80.00th=[30802], 90.00th=[36963], 95.00th=[38536], 00:43:10.272 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:43:10.272 | 99.99th=[44827] 00:43:10.272 bw ( KiB/s): min=12520, max=16384, per=20.46%, avg=14452.00, stdev=2732.26, samples=2 00:43:10.272 iops : min= 3130, max= 4096, avg=3613.00, stdev=683.07, samples=2 00:43:10.272 lat (usec) : 750=0.01% 00:43:10.272 lat (msec) : 2=0.03%, 4=0.33%, 10=8.05%, 20=58.31%, 50=33.27% 00:43:10.272 cpu : usr=2.48%, sys=4.36%, ctx=401, majf=0, minf=1 00:43:10.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:43:10.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.272 issued rwts: total=3584,3708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:10.272 job1: (groupid=0, jobs=1): err= 0: pid=2184551: Tue Jun 11 03:39:51 2024 00:43:10.272 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:43:10.272 slat (nsec): min=1046, max=19995k, avg=135465.95, stdev=887374.81 00:43:10.272 clat (usec): min=8374, max=49055, avg=16546.88, stdev=5384.03 00:43:10.272 lat (usec): min=8380, max=49079, avg=16682.35, stdev=5458.27 00:43:10.272 clat percentiles (usec): 00:43:10.272 | 1.00th=[ 9896], 5.00th=[12387], 10.00th=[13173], 20.00th=[13829], 00:43:10.272 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:43:10.272 | 70.00th=[15664], 80.00th=[17957], 90.00th=[21103], 95.00th=[31327], 00:43:10.272 | 99.00th=[36439], 99.50th=[36439], 99.90th=[41157], 99.95th=[46924], 00:43:10.272 | 99.99th=[49021] 00:43:10.272 write: IOPS=2596, 
BW=10.1MiB/s (10.6MB/s)(10.2MiB/1007msec); 0 zone resets 00:43:10.272 slat (nsec): min=1907, max=20133k, avg=244888.38, stdev=1213851.58 00:43:10.272 clat (msec): min=5, max=103, avg=32.50, stdev=19.23 00:43:10.272 lat (msec): min=7, max=103, avg=32.74, stdev=19.34 00:43:10.272 clat percentiles (msec): 00:43:10.272 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 21], 00:43:10.272 | 30.00th=[ 22], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 26], 00:43:10.272 | 70.00th=[ 39], 80.00th=[ 47], 90.00th=[ 58], 95.00th=[ 73], 00:43:10.272 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 104], 99.95th=[ 104], 00:43:10.272 | 99.99th=[ 104] 00:43:10.272 bw ( KiB/s): min=10184, max=10296, per=14.49%, avg=10240.00, stdev=79.20, samples=2 00:43:10.272 iops : min= 2546, max= 2574, avg=2560.00, stdev=19.80, samples=2 00:43:10.272 lat (msec) : 10=1.76%, 20=48.23%, 50=41.12%, 100=8.48%, 250=0.41% 00:43:10.272 cpu : usr=1.29%, sys=2.98%, ctx=339, majf=0, minf=1 00:43:10.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:43:10.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.272 issued rwts: total=2560,2615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:10.272 job2: (groupid=0, jobs=1): err= 0: pid=2184569: Tue Jun 11 03:39:51 2024 00:43:10.272 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:43:10.272 slat (nsec): min=1300, max=14155k, avg=91147.95, stdev=690740.67 00:43:10.272 clat (usec): min=1117, max=31450, avg=11504.25, stdev=3134.82 00:43:10.272 lat (usec): min=1124, max=31513, avg=11595.40, stdev=3193.33 00:43:10.272 clat percentiles (usec): 00:43:10.272 | 1.00th=[ 3982], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9896], 00:43:10.272 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:43:10.272 | 70.00th=[11600], 80.00th=[13435], 90.00th=[16909], 95.00th=[18220], 00:43:10.272 | 99.00th=[19792], 99.50th=[20055], 99.90th=[21365], 99.95th=[27132], 00:43:10.272 | 99.99th=[31327] 00:43:10.272 write: IOPS=5837, BW=22.8MiB/s (23.9MB/s)(23.1MiB/1011msec); 0 zone resets 00:43:10.272 slat (usec): min=2, max=13001, avg=74.32, stdev=469.09 00:43:10.272 clat (usec): min=1150, max=28476, avg=10747.57, stdev=3274.10 00:43:10.272 lat (usec): min=1160, max=28485, avg=10821.89, stdev=3299.28 00:43:10.272 clat percentiles (usec): 00:43:10.272 | 1.00th=[ 3195], 5.00th=[ 5276], 10.00th=[ 6652], 20.00th=[ 8979], 00:43:10.272 | 30.00th=[10159], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:43:10.272 | 70.00th=[11338], 80.00th=[11600], 90.00th=[14222], 95.00th=[16188], 00:43:10.272 | 99.00th=[23462], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:43:10.272 | 99.99th=[28443] 00:43:10.272 bw ( KiB/s): min=21712, max=24488, per=32.70%, avg=23100.00, stdev=1962.93, samples=2 00:43:10.272 iops : min= 5428, max= 6122, avg=5775.00, stdev=490.73, samples=2 00:43:10.272 lat (msec) : 2=0.04%, 4=1.77%, 10=22.81%, 20=73.76%, 50=1.62% 00:43:10.272 cpu : usr=4.65%, sys=6.24%, ctx=624, majf=0, minf=1 00:43:10.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:43:10.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.272 issued rwts: total=5632,5902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.272 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:43:10.272 job3: (groupid=0, jobs=1): err= 0: pid=2184575: Tue Jun 11 03:39:51 2024 00:43:10.272 read: IOPS=5310, BW=20.7MiB/s (21.8MB/s)(21.0MiB/1010msec) 00:43:10.272 slat (nsec): min=1295, max=15748k, avg=98354.93, stdev=736010.82 00:43:10.272 clat (usec): min=4177, max=35681, avg=12134.15, stdev=3410.27 00:43:10.272 lat (usec): min=4184, max=35707, avg=12232.50, stdev=3464.05 00:43:10.272 clat percentiles (usec): 00:43:10.272 | 1.00th=[ 4686], 5.00th=[ 8094], 10.00th=[ 9372], 20.00th=[10159], 00:43:10.272 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11469], 00:43:10.272 | 70.00th=[12518], 80.00th=[14222], 90.00th=[16909], 95.00th=[19530], 00:43:10.272 | 99.00th=[21627], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:43:10.272 | 99.99th=[35914] 00:43:10.272 write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets 00:43:10.272 slat (usec): min=2, max=34280, avg=78.24, stdev=627.00 00:43:10.272 clat (usec): min=1905, max=45213, avg=11171.94, stdev=5406.14 00:43:10.272 lat (usec): min=1911, max=45224, avg=11250.18, stdev=5428.24 00:43:10.272 clat percentiles (usec): 00:43:10.272 | 1.00th=[ 3032], 5.00th=[ 4883], 10.00th=[ 6980], 20.00th=[ 8848], 00:43:10.272 | 30.00th=[10159], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:43:10.272 | 70.00th=[11207], 80.00th=[11338], 90.00th=[14746], 95.00th=[17433], 00:43:10.272 | 99.00th=[40633], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:43:10.272 | 99.99th=[45351] 00:43:10.272 bw ( KiB/s): min=20480, max=24576, per=31.89%, avg=22528.00, stdev=2896.31, samples=2 00:43:10.272 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:43:10.272 lat (msec) : 2=0.09%, 4=1.39%, 10=20.88%, 20=73.87%, 50=3.77% 00:43:10.272 cpu : usr=5.05%, sys=6.14%, ctx=632, majf=0, minf=1 00:43:10.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:43:10.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:10.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:10.272 issued rwts: total=5364,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:10.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:10.272 00:43:10.272 Run status group 0 (all jobs): 00:43:10.272 READ: bw=66.2MiB/s (69.4MB/s), 9.93MiB/s-21.8MiB/s (10.4MB/s-22.8MB/s), io=67.0MiB (70.2MB), run=1007-1011msec 00:43:10.272 WRITE: bw=69.0MiB/s (72.3MB/s), 10.1MiB/s-22.8MiB/s (10.6MB/s-23.9MB/s), io=69.8MiB (73.1MB), run=1007-1011msec 00:43:10.272 00:43:10.272 Disk stats (read/write): 00:43:10.272 nvme0n1: ios=3122/3135, merge=0/0, ticks=41970/63711, in_queue=105681, util=90.08% 00:43:10.272 nvme0n2: ios=2068/2215, merge=0/0, ticks=16022/37446, in_queue=53468, util=96.75% 00:43:10.272 nvme0n3: ios=4665/5119, merge=0/0, ticks=50934/53586, in_queue=104520, util=90.84% 00:43:10.272 nvme0n4: ios=4626/4671, merge=0/0, ticks=53963/47931, in_queue=101894, util=98.43% 00:43:10.272 03:39:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:10.272 03:39:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2184674 00:43:10.272 03:39:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:10.272 03:39:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:10.272 [global] 00:43:10.272 thread=1 00:43:10.272 invalidate=1 00:43:10.272 rw=read 00:43:10.272 time_based=1 00:43:10.272 runtime=10 
00:43:10.272 ioengine=libaio 00:43:10.272 direct=1 00:43:10.272 bs=4096 00:43:10.272 iodepth=1 00:43:10.272 norandommap=1 00:43:10.272 numjobs=1 00:43:10.272 00:43:10.272 [job0] 00:43:10.272 filename=/dev/nvme0n1 00:43:10.272 [job1] 00:43:10.272 filename=/dev/nvme0n2 00:43:10.272 [job2] 00:43:10.272 filename=/dev/nvme0n3 00:43:10.272 [job3] 00:43:10.272 filename=/dev/nvme0n4 00:43:10.272 Could not set queue depth (nvme0n1) 00:43:10.272 Could not set queue depth (nvme0n2) 00:43:10.272 Could not set queue depth (nvme0n3) 00:43:10.272 Could not set queue depth (nvme0n4) 00:43:10.272 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.272 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.272 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.272 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:10.272 fio-3.35 00:43:10.272 Starting 4 threads 00:43:13.546 03:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:13.546 03:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:13.546 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=22351872, buflen=4096 00:43:13.546 fio: pid=2184999, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:43:13.546 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=323584, buflen=4096 00:43:13.546 fio: pid=2184998, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:43:13.546 03:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:13.546 03:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:13.546 03:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:13.546 03:39:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:13.546 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=35721216, buflen=4096 00:43:13.546 fio: pid=2184996, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:43:13.804 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=323584, buflen=4096 00:43:13.804 fio: pid=2184997, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:43:13.804 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:13.804 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:13.804 00:43:13.804 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2184996: Tue Jun 11 03:39:55 2024 00:43:13.804 read: IOPS=2822, BW=11.0MiB/s (11.6MB/s)(34.1MiB/3090msec) 00:43:13.804 slat (nsec): min=6751, max=75916, avg=7734.51, stdev=1748.26 00:43:13.804 clat (usec): min=208, max=41168, avg=342.13, stdev=1629.08 00:43:13.804 lat (usec): min=226, 
max=41188, avg=349.86, stdev=1629.80 00:43:13.804 clat percentiles (usec): 00:43:13.804 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:43:13.804 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:43:13.804 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 314], 00:43:13.804 | 99.00th=[ 338], 99.50th=[ 375], 99.90th=[41157], 99.95th=[41157], 00:43:13.804 | 99.99th=[41157] 00:43:13.804 bw ( KiB/s): min=12646, max=14784, per=79.12%, avg=13926.00, stdev=859.90, samples=5 00:43:13.804 iops : min= 3161, max= 3696, avg=3481.40, stdev=215.16, samples=5 00:43:13.804 lat (usec) : 250=7.93%, 500=91.86%, 750=0.02% 00:43:13.804 lat (msec) : 10=0.01%, 50=0.16% 00:43:13.804 cpu : usr=1.55%, sys=4.44%, ctx=8725, majf=0, minf=1 00:43:13.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:13.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.804 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.804 issued rwts: total=8722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:13.804 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2184997: Tue Jun 11 03:39:55 2024 00:43:13.804 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(316KiB/3258msec) 00:43:13.804 slat (usec): min=7, max=21820, avg=399.25, stdev=2553.62 00:43:13.804 clat (usec): min=511, max=42288, avg=40572.71, stdev=4574.69 00:43:13.804 lat (usec): min=542, max=63068, avg=40976.86, stdev=5313.00 00:43:13.804 clat percentiles (usec): 00:43:13.804 | 1.00th=[ 510], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:13.804 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:13.804 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:43:13.804 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:13.804 | 99.99th=[42206] 00:43:13.804 bw ( KiB/s): min= 93, max= 103, per=0.55%, avg=96.67, stdev= 3.33, samples=6 00:43:13.804 iops : min= 23, max= 25, avg=24.00, stdev= 0.63, samples=6 00:43:13.804 lat (usec) : 750=1.25% 00:43:13.804 lat (msec) : 50=97.50% 00:43:13.804 cpu : usr=0.00%, sys=0.03%, ctx=85, majf=0, minf=1 00:43:13.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:13.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.804 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.804 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:13.804 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2184998: Tue Jun 11 03:39:55 2024 00:43:13.804 read: IOPS=27, BW=109KiB/s (112kB/s)(316KiB/2897msec) 00:43:13.804 slat (usec): min=9, max=156, avg=19.87, stdev=25.03 00:43:13.804 clat (usec): min=336, max=42026, avg=36385.33, stdev=12990.05 00:43:13.804 lat (usec): min=349, max=42122, avg=36405.17, stdev=12991.72 00:43:13.804 clat percentiles (usec): 00:43:13.804 | 1.00th=[ 338], 5.00th=[ 363], 10.00th=[ 429], 20.00th=[40633], 00:43:13.804 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:13.804 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:43:13.804 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:13.804 | 99.99th=[42206] 00:43:13.804 bw ( KiB/s): 
min= 96, max= 127, per=0.62%, avg=110.20, stdev=11.50, samples=5 00:43:13.804 iops : min= 24, max= 31, avg=27.40, stdev= 2.61, samples=5 00:43:13.804 lat (usec) : 500=10.00%, 750=1.25% 00:43:13.804 lat (msec) : 50=87.50% 00:43:13.804 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=1 00:43:13.804 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:13.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.804 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.804 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.804 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:13.804 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2184999: Tue Jun 11 03:39:55 2024 00:43:13.805 read: IOPS=1995, BW=7981KiB/s (8173kB/s)(21.3MiB/2735msec) 00:43:13.805 slat (nsec): min=7033, max=44675, avg=8065.60, stdev=1713.07 00:43:13.805 clat (usec): min=232, max=42048, avg=486.27, stdev=2912.08 00:43:13.805 lat (usec): min=240, max=42071, avg=494.33, stdev=2913.02 00:43:13.805 clat percentiles (usec): 00:43:13.805 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:43:13.805 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:43:13.805 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 293], 95.00th=[ 302], 00:43:13.805 | 99.00th=[ 322], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:43:13.805 | 99.99th=[42206] 00:43:13.805 bw ( KiB/s): min= 104, max=13968, per=49.53%, avg=8717.60, stdev=7179.17, samples=5 00:43:13.805 iops : min= 26, max= 3492, avg=2179.40, stdev=1794.79, samples=5 00:43:13.805 lat (usec) : 250=1.32%, 500=98.08%, 750=0.05% 00:43:13.805 lat (msec) : 2=0.02%, 50=0.51% 00:43:13.805 cpu : usr=0.88%, sys=3.40%, ctx=5458, majf=0, minf=2 00:43:13.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:13.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.805 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:13.805 issued rwts: total=5458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:13.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:13.805 00:43:13.805 Run status group 0 (all jobs): 00:43:13.805 READ: bw=17.2MiB/s (18.0MB/s), 97.0KiB/s-11.0MiB/s (99.3kB/s-11.6MB/s), io=56.0MiB (58.7MB), run=2735-3258msec 00:43:13.805 00:43:13.805 Disk stats (read/write): 00:43:13.805 nvme0n1: ios=8715/0, merge=0/0, ticks=2631/0, in_queue=2631, util=95.43% 00:43:13.805 nvme0n2: ios=112/0, merge=0/0, ticks=4000/0, in_queue=4000, util=98.48% 00:43:13.805 nvme0n3: ios=78/0, merge=0/0, ticks=2836/0, in_queue=2836, util=96.55% 00:43:13.805 nvme0n4: ios=5454/0, merge=0/0, ticks=2460/0, in_queue=2460, util=96.49% 00:43:14.064 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:14.064 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:14.064 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:14.064 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:14.322 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:43:14.322 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:14.582 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:14.582 03:39:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2184674 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:14.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:14.841 nvmf hotplug test: fio failed as expected 00:43:14.841 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:15.100 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:15.101 rmmod nvme_tcp 00:43:15.101 rmmod nvme_fabrics 00:43:15.101 rmmod nvme_keyring 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # 
return 0 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2181938 ']' 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2181938 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 2181938 ']' 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 2181938 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2181938 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2181938' 00:43:15.101 killing process with pid 2181938 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 2181938 00:43:15.101 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 2181938 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:15.360 03:39:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:17.898 03:39:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:17.898 00:43:17.898 real 0m26.684s 00:43:17.898 user 1m46.364s 00:43:17.898 sys 0m7.960s 00:43:17.898 03:39:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:17.898 03:39:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:17.898 ************************************ 00:43:17.898 END TEST nvmf_fio_target 00:43:17.898 ************************************ 00:43:17.898 03:39:58 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:43:17.898 03:39:58 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:43:17.898 03:39:58 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:17.898 03:39:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:17.898 ************************************ 00:43:17.898 START TEST nvmf_bdevio 00:43:17.898 ************************************ 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:43:17.898 * Looking for test storage... 
00:43:17.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:17.898 03:39:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:43:17.899 03:39:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:23.177 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:43:23.178 Found 0000:86:00.0 (0x8086 - 0x159b) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:43:23.178 Found 0000:86:00.1 (0x8086 - 0x159b) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:43:23.178 Found net devices under 0000:86:00.0: cvl_0_0 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:43:23.178 
Found net devices under 0000:86:00.1: cvl_0_1 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:23.178 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:23.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:23.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:43:23.437 00:43:23.437 --- 10.0.0.2 ping statistics --- 00:43:23.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:23.437 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:23.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:23.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:43:23.437 00:43:23.437 --- 10.0.0.1 ping statistics --- 00:43:23.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:23.437 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2189520 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2189520 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 2189520 ']' 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:23.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:23.437 03:40:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:23.697 [2024-06-11 03:40:04.879537] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:43:23.697 [2024-06-11 03:40:04.879585] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:23.697 EAL: No free 2048 kB hugepages reported on node 1 00:43:23.697 [2024-06-11 03:40:04.943631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:23.697 [2024-06-11 03:40:04.985870] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:23.697 [2024-06-11 03:40:04.985910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:43:23.697 [2024-06-11 03:40:04.985916] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:23.697 [2024-06-11 03:40:04.985922] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:23.697 [2024-06-11 03:40:04.985927] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:23.697 [2024-06-11 03:40:04.986054] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:43:23.697 [2024-06-11 03:40:04.986532] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:43:23.697 [2024-06-11 03:40:04.986619] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:43:23.697 [2024-06-11 03:40:04.986620] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:24.633 [2024-06-11 03:40:05.737014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:24.633 Malloc0 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:24.633 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:43:24.634 [2024-06-11 03:40:05.788165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:43:24.634 { 00:43:24.634 "params": { 00:43:24.634 "name": "Nvme$subsystem", 00:43:24.634 "trtype": "$TEST_TRANSPORT", 00:43:24.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:24.634 "adrfam": "ipv4", 00:43:24.634 "trsvcid": "$NVMF_PORT", 00:43:24.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:24.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:24.634 "hdgst": ${hdgst:-false}, 00:43:24.634 "ddgst": ${ddgst:-false} 00:43:24.634 }, 00:43:24.634 "method": "bdev_nvme_attach_controller" 00:43:24.634 } 00:43:24.634 EOF 00:43:24.634 )") 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:43:24.634 03:40:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:43:24.634 "params": { 00:43:24.634 "name": "Nvme1", 00:43:24.634 "trtype": "tcp", 00:43:24.634 "traddr": "10.0.0.2", 00:43:24.634 "adrfam": "ipv4", 00:43:24.634 "trsvcid": "4420", 00:43:24.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:24.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:24.634 "hdgst": false, 00:43:24.634 "ddgst": false 00:43:24.634 }, 00:43:24.634 "method": "bdev_nvme_attach_controller" 00:43:24.634 }' 00:43:24.634 [2024-06-11 03:40:05.836964] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
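For reference, the target-side bring-up traced above reduces to five RPCs (rpc_cmd in the trace wraps scripts/rpc.py against the default /var/tmp/spdk.sock). A minimal standalone sketch, run after nvmf_tgt is up:

# transport options mirror the trace: -t tcp -o, plus an 8192-byte I/O unit size (-u)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem that allows any host (-a), with the serial number the suite uses
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen on the address assigned to cvl_0_0 inside the test namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420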
00:43:24.634 [2024-06-11 03:40:05.837004] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189756 ] 00:43:24.634 EAL: No free 2048 kB hugepages reported on node 1 00:43:24.634 [2024-06-11 03:40:05.896433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:24.634 [2024-06-11 03:40:05.938348] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:43:24.634 [2024-06-11 03:40:05.938445] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:43:24.634 [2024-06-11 03:40:05.938446] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:43:24.893 I/O targets: 00:43:24.893 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:24.893 00:43:24.893 00:43:24.893 CUnit - A unit testing framework for C - Version 2.1-3 00:43:24.893 http://cunit.sourceforge.net/ 00:43:24.893 00:43:24.893 00:43:24.893 Suite: bdevio tests on: Nvme1n1 00:43:24.893 Test: blockdev write read block ...passed 00:43:25.152 Test: blockdev write zeroes read block ...passed 00:43:25.152 Test: blockdev write zeroes read no split ...passed 00:43:25.152 Test: blockdev write zeroes read split ...passed 00:43:25.152 Test: blockdev write zeroes read split partial ...passed 00:43:25.152 Test: blockdev reset ...[2024-06-11 03:40:06.408796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:43:25.152 [2024-06-11 03:40:06.408854] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1431300 (9): Bad file descriptor 00:43:25.152 [2024-06-11 03:40:06.506419] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:43:25.152 passed 00:43:25.152 Test: blockdev write read 8 blocks ...passed 00:43:25.152 Test: blockdev write read size > 128k ...passed 00:43:25.152 Test: blockdev write read invalid size ...passed 00:43:25.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:25.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:25.152 Test: blockdev write read max offset ...passed 00:43:25.411 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:25.411 Test: blockdev writev readv 8 blocks ...passed 00:43:25.411 Test: blockdev writev readv 30 x 1block ...passed 00:43:25.411 Test: blockdev writev readv block ...passed 00:43:25.411 Test: blockdev writev readv size > 128k ...passed 00:43:25.411 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:25.411 Test: blockdev comparev and writev ...[2024-06-11 03:40:06.721095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.721121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.721134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.721142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.721398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.721408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.721419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.721426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.721692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.721701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.721711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.721723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.721973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.721983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.721994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:25.411 [2024-06-11 03:40:06.722001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:25.411 passed 00:43:25.411 Test: blockdev nvme passthru rw ...passed 00:43:25.411 Test: blockdev nvme passthru vendor specific ...[2024-06-11 03:40:06.805383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:25.411 [2024-06-11 03:40:06.805398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.805538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:25.411 [2024-06-11 03:40:06.805547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.805671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:25.411 [2024-06-11 03:40:06.805680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:25.411 [2024-06-11 03:40:06.805807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:25.411 [2024-06-11 03:40:06.805816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:25.411 passed 00:43:25.670 Test: blockdev nvme admin passthru ...passed 00:43:25.670 Test: blockdev copy ...passed 00:43:25.670 00:43:25.670 Run Summary: Type Total Ran Passed Failed Inactive 00:43:25.670 suites 1 1 n/a 0 0 00:43:25.670 tests 23 23 23 0 0 00:43:25.670 asserts 152 152 152 0 n/a 00:43:25.670 00:43:25.670 Elapsed time = 1.312 seconds 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:25.670 rmmod nvme_tcp 00:43:25.670 rmmod nvme_fabrics 00:43:25.670 rmmod nvme_keyring 00:43:25.670 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2189520 ']' 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2189520 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 
2189520 ']' 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 2189520 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2189520 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2189520' 00:43:25.929 killing process with pid 2189520 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 2189520 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 2189520 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:25.929 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:25.930 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:25.930 03:40:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:25.930 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:25.930 03:40:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:28.464 03:40:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:43:28.464 00:43:28.464 real 0m10.623s 00:43:28.464 user 0m13.336s 00:43:28.464 sys 0m4.944s 00:43:28.464 03:40:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:43:28.464 03:40:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:28.464 ************************************ 00:43:28.464 END TEST nvmf_bdevio 00:43:28.464 ************************************ 00:43:28.464 03:40:09 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:43:28.464 03:40:09 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:43:28.464 03:40:09 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:43:28.464 03:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:28.464 ************************************ 00:43:28.464 START TEST nvmf_auth_target 00:43:28.464 ************************************ 00:43:28.464 03:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:43:28.464 * Looking for test storage... 
00:43:28.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:28.464 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:28.464 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:43:28.464 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:28.464 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:28.464 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:43:28.465 03:40:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:35.100 03:40:15 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:43:35.100 Found 0000:86:00.0 (0x8086 - 0x159b) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:43:35.100 Found 0000:86:00.1 (0x8086 - 0x159b) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:43:35.100 Found net devices under 0000:86:00.0: cvl_0_0 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:43:35.100 Found net devices under 0000:86:00.1: cvl_0_1 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:43:35.100 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:43:35.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:35.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:43:35.101 00:43:35.101 --- 10.0.0.2 ping statistics --- 00:43:35.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:35.101 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:35.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:35.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:43:35.101 00:43:35.101 --- 10.0.0.1 ping statistics --- 00:43:35.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:35.101 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2193738 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2193738 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2193738 ']' 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
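The two pings above verify the split-namespace topology this suite runs on: the target port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace. Condensed from the nvmf_tcp_init trace (as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the target side
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator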
00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2193830 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=298320638574cccf2fd7b677dbb7a1fe330f402e91ef498f 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bN1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 298320638574cccf2fd7b677dbb7a1fe330f402e91ef498f 0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 298320638574cccf2fd7b677dbb7a1fe330f402e91ef498f 0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=298320638574cccf2fd7b677dbb7a1fe330f402e91ef498f 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bN1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bN1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.bN1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=859ee2292cc89b6645406ac51b3d52ba0699a73624ea97b184b14e1aa90e523e 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Nyw 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 859ee2292cc89b6645406ac51b3d52ba0699a73624ea97b184b14e1aa90e523e 3 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 859ee2292cc89b6645406ac51b3d52ba0699a73624ea97b184b14e1aa90e523e 3 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=859ee2292cc89b6645406ac51b3d52ba0699a73624ea97b184b14e1aa90e523e 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:43:35.101 03:40:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Nyw 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Nyw 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Nyw 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f3d9c75069195b3fcb511216da7c13b4 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.JC3 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f3d9c75069195b3fcb511216da7c13b4 1 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f3d9c75069195b3fcb511216da7c13b4 1 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=f3d9c75069195b3fcb511216da7c13b4 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.JC3 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.JC3 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.JC3 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7f0612183d479383a9ecde49c18ec48889edfb26a21dbb1b 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qe2 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7f0612183d479383a9ecde49c18ec48889edfb26a21dbb1b 2 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7f0612183d479383a9ecde49c18ec48889edfb26a21dbb1b 2 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7f0612183d479383a9ecde49c18ec48889edfb26a21dbb1b 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:43:35.101 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qe2 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qe2 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Qe2 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bb7f8b02dd1c404652f7c0451f0d60e541dcfdc6b6e26f78 00:43:35.102 
03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qMk 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bb7f8b02dd1c404652f7c0451f0d60e541dcfdc6b6e26f78 2 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bb7f8b02dd1c404652f7c0451f0d60e541dcfdc6b6e26f78 2 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bb7f8b02dd1c404652f7c0451f0d60e541dcfdc6b6e26f78 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qMk 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qMk 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.qMk 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=db06c8e251f7d82726cdfc4d6b1933ad 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3R8 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key db06c8e251f7d82726cdfc4d6b1933ad 1 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 db06c8e251f7d82726cdfc4d6b1933ad 1 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=db06c8e251f7d82726cdfc4d6b1933ad 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3R8 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3R8 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.3R8 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0487e6b6c438430e4a61e0a4afa491cf907c19851ae8db86ce0fadcdbd400bbe 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZY0 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0487e6b6c438430e4a61e0a4afa491cf907c19851ae8db86ce0fadcdbd400bbe 3 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0487e6b6c438430e4a61e0a4afa491cf907c19851ae8db86ce0fadcdbd400bbe 3 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0487e6b6c438430e4a61e0a4afa491cf907c19851ae8db86ce0fadcdbd400bbe 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZY0 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZY0 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ZY0 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2193738 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2193738 ']' 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:35.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
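Each gen_dhchap_key call above draws key material from /dev/urandom with xxd and wraps it into a DHHC-1 secret; the digest field follows the mapping declared in the trace (null=0, sha256=1, sha384=2, sha512=3). A minimal sketch of the wrapping the inline python performs, assuming a little-endian CRC-32 of the secret is appended before base64 encoding (consistent with the secrets shown later, e.g. DHHC-1:00:Mjk4..., whose base64 prefix decodes back to the 48-character hex key):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of key material
python3 - "$key" <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")  # assumed integrity tail, per the DHHC-1 text form
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
PY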
00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2193830 /var/tmp/host.sock 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2193830 ']' 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:43:35.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:43:35.102 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bN1 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.bN1 00:43:35.361 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.bN1 00:43:35.619 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Nyw ]] 00:43:35.619 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Nyw 00:43:35.619 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.619 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.619 03:40:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.619 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Nyw 00:43:35.619 03:40:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Nyw 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.JC3 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.JC3 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.JC3 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Qe2 ]] 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qe2 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qe2 00:43:35.878 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qe2 00:43:36.138 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:43:36.138 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qMk 00:43:36.138 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:36.138 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.138 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:36.138 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qMk 00:43:36.138 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qMk 00:43:36.395 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.3R8 ]] 00:43:36.395 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3R8 00:43:36.395 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:36.395 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.395 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:36.395 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3R8 00:43:36.395 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.3R8 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZY0 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZY0 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZY0 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:36.653 03:40:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:36.911 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:37.168 00:43:37.168 03:40:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:37.168 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:37.168 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:37.426 { 00:43:37.426 "cntlid": 1, 00:43:37.426 "qid": 0, 00:43:37.426 "state": "enabled", 00:43:37.426 "listen_address": { 00:43:37.426 "trtype": "TCP", 00:43:37.426 "adrfam": "IPv4", 00:43:37.426 "traddr": "10.0.0.2", 00:43:37.426 "trsvcid": "4420" 00:43:37.426 }, 00:43:37.426 "peer_address": { 00:43:37.426 "trtype": "TCP", 00:43:37.426 "adrfam": "IPv4", 00:43:37.426 "traddr": "10.0.0.1", 00:43:37.426 "trsvcid": "59422" 00:43:37.426 }, 00:43:37.426 "auth": { 00:43:37.426 "state": "completed", 00:43:37.426 "digest": "sha256", 00:43:37.426 "dhgroup": "null" 00:43:37.426 } 00:43:37.426 } 00:43:37.426 ]' 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:37.426 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:37.685 03:40:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:38.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:38.253 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:38.512 00:43:38.512 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:38.512 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:38.512 03:40:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:38.771 { 00:43:38.771 "cntlid": 3, 00:43:38.771 "qid": 0, 00:43:38.771 "state": "enabled", 00:43:38.771 "listen_address": { 00:43:38.771 
"trtype": "TCP", 00:43:38.771 "adrfam": "IPv4", 00:43:38.771 "traddr": "10.0.0.2", 00:43:38.771 "trsvcid": "4420" 00:43:38.771 }, 00:43:38.771 "peer_address": { 00:43:38.771 "trtype": "TCP", 00:43:38.771 "adrfam": "IPv4", 00:43:38.771 "traddr": "10.0.0.1", 00:43:38.771 "trsvcid": "59446" 00:43:38.771 }, 00:43:38.771 "auth": { 00:43:38.771 "state": "completed", 00:43:38.771 "digest": "sha256", 00:43:38.771 "dhgroup": "null" 00:43:38.771 } 00:43:38.771 } 00:43:38.771 ]' 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:38.771 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:39.030 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:39.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:39.597 03:40:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:39.854 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:39.855 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:40.112 00:43:40.112 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:40.112 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:40.112 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:40.371 { 00:43:40.371 "cntlid": 5, 00:43:40.371 "qid": 0, 00:43:40.371 "state": "enabled", 00:43:40.371 "listen_address": { 00:43:40.371 "trtype": "TCP", 00:43:40.371 "adrfam": "IPv4", 00:43:40.371 "traddr": "10.0.0.2", 00:43:40.371 "trsvcid": "4420" 00:43:40.371 }, 00:43:40.371 "peer_address": { 00:43:40.371 "trtype": "TCP", 00:43:40.371 "adrfam": "IPv4", 00:43:40.371 "traddr": "10.0.0.1", 00:43:40.371 "trsvcid": "59466" 00:43:40.371 }, 00:43:40.371 "auth": { 00:43:40.371 "state": "completed", 00:43:40.371 "digest": "sha256", 00:43:40.371 "dhgroup": "null" 00:43:40.371 } 00:43:40.371 } 00:43:40.371 ]' 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:40.371 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:40.628 03:40:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:41.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:41.195 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:43:41.453 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:41.454 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:41.454 00:43:41.712 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:41.712 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:41.712 03:40:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:41.712 { 00:43:41.712 "cntlid": 7, 00:43:41.712 "qid": 0, 00:43:41.712 "state": "enabled", 00:43:41.712 "listen_address": { 00:43:41.712 "trtype": "TCP", 00:43:41.712 "adrfam": "IPv4", 00:43:41.712 "traddr": "10.0.0.2", 00:43:41.712 "trsvcid": "4420" 00:43:41.712 }, 00:43:41.712 "peer_address": { 00:43:41.712 "trtype": "TCP", 00:43:41.712 "adrfam": "IPv4", 00:43:41.712 "traddr": "10.0.0.1", 00:43:41.712 "trsvcid": "41010" 00:43:41.712 }, 00:43:41.712 "auth": { 00:43:41.712 "state": "completed", 00:43:41.712 "digest": "sha256", 00:43:41.712 "dhgroup": "null" 00:43:41.712 } 00:43:41.712 } 00:43:41.712 ]' 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:41.712 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:41.971 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:41.971 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:41.971 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:41.972 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:41.972 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:41.972 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:42.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:42.540 
03:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:42.540 03:40:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:42.798 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:43.057 00:43:43.057 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:43.057 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:43.057 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:43.316 03:40:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:43.316 { 00:43:43.316 "cntlid": 9, 00:43:43.316 "qid": 0, 00:43:43.316 "state": "enabled", 00:43:43.316 "listen_address": { 00:43:43.316 "trtype": "TCP", 00:43:43.316 "adrfam": "IPv4", 00:43:43.316 "traddr": "10.0.0.2", 00:43:43.316 "trsvcid": "4420" 00:43:43.316 }, 00:43:43.316 "peer_address": { 00:43:43.316 "trtype": "TCP", 00:43:43.316 "adrfam": "IPv4", 00:43:43.316 "traddr": "10.0.0.1", 00:43:43.316 "trsvcid": "41042" 00:43:43.316 }, 00:43:43.316 "auth": { 00:43:43.316 "state": "completed", 00:43:43.316 "digest": "sha256", 00:43:43.316 "dhgroup": "ffdhe2048" 00:43:43.316 } 00:43:43.316 } 00:43:43.316 ]' 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:43.316 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:43.576 03:40:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:44.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:44.144 03:40:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:44.144 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.403 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:44.403 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:44.403 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:44.403 00:43:44.403 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:44.403 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:44.403 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:44.662 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:44.662 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:44.662 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:44.662 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.662 03:40:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:44.662 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:44.662 { 00:43:44.662 "cntlid": 11, 00:43:44.662 "qid": 0, 00:43:44.662 "state": "enabled", 00:43:44.662 "listen_address": { 00:43:44.662 "trtype": "TCP", 00:43:44.662 "adrfam": "IPv4", 00:43:44.662 "traddr": "10.0.0.2", 00:43:44.662 "trsvcid": "4420" 00:43:44.662 }, 00:43:44.662 "peer_address": { 00:43:44.662 "trtype": "TCP", 00:43:44.662 "adrfam": "IPv4", 00:43:44.662 "traddr": "10.0.0.1", 00:43:44.662 "trsvcid": "41072" 00:43:44.662 }, 00:43:44.662 "auth": { 00:43:44.662 "state": "completed", 00:43:44.662 "digest": "sha256", 00:43:44.663 "dhgroup": "ffdhe2048" 00:43:44.663 } 00:43:44.663 } 00:43:44.663 ]' 00:43:44.663 03:40:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:44.663 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:44.663 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:44.663 03:40:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:44.663 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:44.922 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:44.922 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:44.922 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:44.922 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:45.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:45.490 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:45.749 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:43:45.749 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:45.749 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:45.749 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:45.749 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:45.749 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:45.749 03:40:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:45.749 03:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:45.749 03:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:45.749 03:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:45.749 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:45.749 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:46.009 00:43:46.009 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:46.009 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:46.009 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:46.009 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:46.009 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:46.009 03:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:46.009 03:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:46.268 { 00:43:46.268 "cntlid": 13, 00:43:46.268 "qid": 0, 00:43:46.268 "state": "enabled", 00:43:46.268 "listen_address": { 00:43:46.268 "trtype": "TCP", 00:43:46.268 "adrfam": "IPv4", 00:43:46.268 "traddr": "10.0.0.2", 00:43:46.268 "trsvcid": "4420" 00:43:46.268 }, 00:43:46.268 "peer_address": { 00:43:46.268 "trtype": "TCP", 00:43:46.268 "adrfam": "IPv4", 00:43:46.268 "traddr": "10.0.0.1", 00:43:46.268 "trsvcid": "41098" 00:43:46.268 }, 00:43:46.268 "auth": { 00:43:46.268 "state": "completed", 00:43:46.268 "digest": "sha256", 00:43:46.268 "dhgroup": "ffdhe2048" 00:43:46.268 } 00:43:46.268 } 00:43:46.268 ]' 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:46.268 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:46.527 03:40:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:43:47.096 03:40:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:47.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:47.096 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:47.096 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.096 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:47.096 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.096 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:47.096 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:47.097 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:47.356 00:43:47.356 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:47.356 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:47.356 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:47.615 { 00:43:47.615 "cntlid": 15, 00:43:47.615 "qid": 0, 00:43:47.615 "state": "enabled", 00:43:47.615 "listen_address": { 00:43:47.615 "trtype": "TCP", 00:43:47.615 "adrfam": "IPv4", 00:43:47.615 "traddr": "10.0.0.2", 00:43:47.615 "trsvcid": "4420" 00:43:47.615 }, 00:43:47.615 "peer_address": { 00:43:47.615 "trtype": "TCP", 00:43:47.615 "adrfam": "IPv4", 00:43:47.615 "traddr": "10.0.0.1", 00:43:47.615 "trsvcid": "41128" 00:43:47.615 }, 00:43:47.615 "auth": { 00:43:47.615 "state": "completed", 00:43:47.615 "digest": "sha256", 00:43:47.615 "dhgroup": "ffdhe2048" 00:43:47.615 } 00:43:47.615 } 00:43:47.615 ]' 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:47.615 03:40:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:47.875 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:48.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:48.443 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:48.703 03:40:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:48.962 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:48.962 { 00:43:48.962 "cntlid": 17, 00:43:48.962 "qid": 0, 00:43:48.962 "state": "enabled", 00:43:48.962 "listen_address": { 00:43:48.962 "trtype": "TCP", 00:43:48.962 "adrfam": "IPv4", 00:43:48.962 "traddr": "10.0.0.2", 00:43:48.962 "trsvcid": "4420" 00:43:48.962 }, 00:43:48.962 "peer_address": { 00:43:48.962 "trtype": "TCP", 00:43:48.962 "adrfam": "IPv4", 00:43:48.962 "traddr": "10.0.0.1", 00:43:48.962 "trsvcid": "41150" 00:43:48.962 }, 00:43:48.962 "auth": { 00:43:48.962 "state": "completed", 00:43:48.962 "digest": "sha256", 00:43:48.962 "dhgroup": "ffdhe3072" 00:43:48.962 } 00:43:48.962 } 00:43:48.962 ]' 00:43:48.962 03:40:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:48.962 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:49.221 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:49.221 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:49.221 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:49.221 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:49.221 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:49.221 03:40:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:49.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:49.853 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:50.112 
03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:50.112 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:50.371 00:43:50.371 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:50.371 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:50.371 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:50.631 { 00:43:50.631 "cntlid": 19, 00:43:50.631 "qid": 0, 00:43:50.631 "state": "enabled", 00:43:50.631 "listen_address": { 00:43:50.631 "trtype": "TCP", 00:43:50.631 "adrfam": "IPv4", 00:43:50.631 "traddr": "10.0.0.2", 00:43:50.631 "trsvcid": "4420" 00:43:50.631 }, 00:43:50.631 "peer_address": { 00:43:50.631 "trtype": "TCP", 00:43:50.631 "adrfam": "IPv4", 00:43:50.631 "traddr": "10.0.0.1", 00:43:50.631 "trsvcid": "41158" 00:43:50.631 }, 00:43:50.631 "auth": { 00:43:50.631 "state": "completed", 00:43:50.631 "digest": "sha256", 00:43:50.631 "dhgroup": "ffdhe3072" 00:43:50.631 } 00:43:50.631 } 00:43:50.631 ]' 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:50.631 03:40:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:50.890 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:51.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:51.459 03:40:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:51.717 00:43:51.718 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:51.718 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:51.718 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:51.976 { 00:43:51.976 "cntlid": 21, 00:43:51.976 "qid": 0, 00:43:51.976 "state": "enabled", 00:43:51.976 "listen_address": { 00:43:51.976 "trtype": "TCP", 00:43:51.976 "adrfam": "IPv4", 00:43:51.976 "traddr": "10.0.0.2", 00:43:51.976 "trsvcid": "4420" 00:43:51.976 }, 00:43:51.976 "peer_address": { 00:43:51.976 "trtype": "TCP", 00:43:51.976 "adrfam": "IPv4", 00:43:51.976 "traddr": "10.0.0.1", 00:43:51.976 "trsvcid": "56524" 00:43:51.976 }, 00:43:51.976 "auth": { 00:43:51.976 "state": "completed", 00:43:51.976 "digest": "sha256", 00:43:51.976 "dhgroup": "ffdhe3072" 00:43:51.976 } 00:43:51.976 } 00:43:51.976 ]' 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:51.976 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:52.235 03:40:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:52.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:52.803 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:53.062 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:53.321 00:43:53.321 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:53.321 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:53.321 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:53.321 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:53.321 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:53.322 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:53.322 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.322 03:40:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:53.322 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:53.322 { 00:43:53.322 "cntlid": 23, 00:43:53.322 "qid": 0, 00:43:53.322 "state": "enabled", 00:43:53.322 "listen_address": { 00:43:53.322 "trtype": "TCP", 00:43:53.322 "adrfam": "IPv4", 00:43:53.322 "traddr": "10.0.0.2", 00:43:53.322 "trsvcid": "4420" 00:43:53.322 }, 00:43:53.322 "peer_address": { 00:43:53.322 "trtype": "TCP", 00:43:53.322 
"adrfam": "IPv4", 00:43:53.322 "traddr": "10.0.0.1", 00:43:53.322 "trsvcid": "56560" 00:43:53.322 }, 00:43:53.322 "auth": { 00:43:53.322 "state": "completed", 00:43:53.322 "digest": "sha256", 00:43:53.322 "dhgroup": "ffdhe3072" 00:43:53.322 } 00:43:53.322 } 00:43:53.322 ]' 00:43:53.322 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:53.581 03:40:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:54.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:54.149 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:54.408 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:54.667 00:43:54.667 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:54.667 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:54.667 03:40:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:54.925 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:54.925 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:54.925 03:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:54.925 03:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:54.925 03:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:54.925 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:54.925 { 00:43:54.925 "cntlid": 25, 00:43:54.925 "qid": 0, 00:43:54.925 "state": "enabled", 00:43:54.925 "listen_address": { 00:43:54.925 "trtype": "TCP", 00:43:54.925 "adrfam": "IPv4", 00:43:54.925 "traddr": "10.0.0.2", 00:43:54.925 "trsvcid": "4420" 00:43:54.926 }, 00:43:54.926 "peer_address": { 00:43:54.926 "trtype": "TCP", 00:43:54.926 "adrfam": "IPv4", 00:43:54.926 "traddr": "10.0.0.1", 00:43:54.926 "trsvcid": "56582" 00:43:54.926 }, 00:43:54.926 "auth": { 00:43:54.926 "state": "completed", 00:43:54.926 "digest": "sha256", 00:43:54.926 "dhgroup": "ffdhe4096" 00:43:54.926 } 00:43:54.926 } 00:43:54.926 ]' 00:43:54.926 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:54.926 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:54.926 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:54.926 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:54.926 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:54.926 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:54.926 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:54.926 
03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:55.184 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:43:55.753 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:55.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:55.753 03:40:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:55.753 03:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:55.753 03:40:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.753 03:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:55.753 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:55.753 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:55.753 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:56.012 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:56.012 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:56.272 { 00:43:56.272 "cntlid": 27, 00:43:56.272 "qid": 0, 00:43:56.272 "state": "enabled", 00:43:56.272 "listen_address": { 00:43:56.272 "trtype": "TCP", 00:43:56.272 "adrfam": "IPv4", 00:43:56.272 "traddr": "10.0.0.2", 00:43:56.272 "trsvcid": "4420" 00:43:56.272 }, 00:43:56.272 "peer_address": { 00:43:56.272 "trtype": "TCP", 00:43:56.272 "adrfam": "IPv4", 00:43:56.272 "traddr": "10.0.0.1", 00:43:56.272 "trsvcid": "56616" 00:43:56.272 }, 00:43:56.272 "auth": { 00:43:56.272 "state": "completed", 00:43:56.272 "digest": "sha256", 00:43:56.272 "dhgroup": "ffdhe4096" 00:43:56.272 } 00:43:56.272 } 00:43:56.272 ]' 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:56.272 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:56.531 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:56.531 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:56.531 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:56.531 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:56.531 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:56.531 03:40:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:57.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
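(The rounds above all follow the same shape, driven by connect_authenticate() in target/auth.sh. A minimal sketch of one round, reconstructed from the xtrace: $HOST_NQN and $HOST_ID stand in for the generated nqn.2014-08.org.nvmexpress:uuid:... values, $KEY1/$CKEY1 for the DHHC-1 strings shown above, and paths are shortened; the log's hostrpc helper is the same rpc.py pointed at /var/tmp/host.sock.)

  # Host side: pin the digest/dhgroup under test for this iteration.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # Target side: allow $HOST_NQN to authenticate against cnode0 with this key pair.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host side: attach over TCP, verify the negotiated auth params, detach.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOST_NQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # The kernel initiator then repeats the handshake with the raw DHHC-1 secrets,
  # and the round is torn down before the next key index.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOST_NQN" \
      --hostid "$HOST_ID" --dhchap-secret "$KEY1" --dhchap-ctrl-secret "$CKEY1"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOST_NQN"
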
00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:57.099 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:57.358 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:57.617 00:43:57.617 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:57.617 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:57.617 03:40:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:57.877 
03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:57.877 { 00:43:57.877 "cntlid": 29, 00:43:57.877 "qid": 0, 00:43:57.877 "state": "enabled", 00:43:57.877 "listen_address": { 00:43:57.877 "trtype": "TCP", 00:43:57.877 "adrfam": "IPv4", 00:43:57.877 "traddr": "10.0.0.2", 00:43:57.877 "trsvcid": "4420" 00:43:57.877 }, 00:43:57.877 "peer_address": { 00:43:57.877 "trtype": "TCP", 00:43:57.877 "adrfam": "IPv4", 00:43:57.877 "traddr": "10.0.0.1", 00:43:57.877 "trsvcid": "56646" 00:43:57.877 }, 00:43:57.877 "auth": { 00:43:57.877 "state": "completed", 00:43:57.877 "digest": "sha256", 00:43:57.877 "dhgroup": "ffdhe4096" 00:43:57.877 } 00:43:57.877 } 00:43:57.877 ]' 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:57.877 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:58.136 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:58.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:58.704 03:40:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:58.704 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:58.963 00:43:58.963 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:58.963 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:58.963 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:59.222 { 00:43:59.222 "cntlid": 31, 00:43:59.222 "qid": 0, 00:43:59.222 "state": "enabled", 00:43:59.222 "listen_address": { 00:43:59.222 "trtype": "TCP", 00:43:59.222 "adrfam": "IPv4", 00:43:59.222 "traddr": "10.0.0.2", 00:43:59.222 "trsvcid": "4420" 00:43:59.222 }, 00:43:59.222 "peer_address": { 00:43:59.222 "trtype": "TCP", 00:43:59.222 "adrfam": "IPv4", 00:43:59.222 "traddr": "10.0.0.1", 00:43:59.222 "trsvcid": "56688" 00:43:59.222 }, 00:43:59.222 "auth": { 00:43:59.222 "state": "completed", 00:43:59.222 "digest": "sha256", 00:43:59.222 "dhgroup": "ffdhe4096" 00:43:59.222 } 00:43:59.222 } 00:43:59.222 ]' 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:59.222 03:40:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:59.481 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:59.481 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:59.481 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:59.481 03:40:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:00.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:00.049 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:44:00.308 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:00.567 00:44:00.567 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:00.567 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:00.567 03:40:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:00.826 { 00:44:00.826 "cntlid": 33, 00:44:00.826 "qid": 0, 00:44:00.826 "state": "enabled", 00:44:00.826 "listen_address": { 00:44:00.826 "trtype": "TCP", 00:44:00.826 "adrfam": "IPv4", 00:44:00.826 "traddr": "10.0.0.2", 00:44:00.826 "trsvcid": "4420" 00:44:00.826 }, 00:44:00.826 "peer_address": { 00:44:00.826 "trtype": "TCP", 00:44:00.826 "adrfam": "IPv4", 00:44:00.826 "traddr": "10.0.0.1", 00:44:00.826 "trsvcid": "56706" 00:44:00.826 }, 00:44:00.826 "auth": { 00:44:00.826 "state": "completed", 00:44:00.826 "digest": "sha256", 00:44:00.826 "dhgroup": "ffdhe6144" 00:44:00.826 } 00:44:00.826 } 00:44:00.826 ]' 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:00.826 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:01.085 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:44:01.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:01.651 03:40:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:01.910 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:02.170 00:44:02.170 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:02.170 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:02.170 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
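(Each nvmf_subsystem_get_qpairs call like the one above feeds the jq assertions at target/auth.sh@46-48, which check what was actually negotiated on the qpair rather than trusting that the attach authenticated at all. A condensed sketch of that check, assuming $qpairs holds the JSON array printed in the log:)

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # The qpair must report the digest and DH group the host was pinned to,
  # and auth.state must read "completed".
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
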
00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:02.429 { 00:44:02.429 "cntlid": 35, 00:44:02.429 "qid": 0, 00:44:02.429 "state": "enabled", 00:44:02.429 "listen_address": { 00:44:02.429 "trtype": "TCP", 00:44:02.429 "adrfam": "IPv4", 00:44:02.429 "traddr": "10.0.0.2", 00:44:02.429 "trsvcid": "4420" 00:44:02.429 }, 00:44:02.429 "peer_address": { 00:44:02.429 "trtype": "TCP", 00:44:02.429 "adrfam": "IPv4", 00:44:02.429 "traddr": "10.0.0.1", 00:44:02.429 "trsvcid": "40318" 00:44:02.429 }, 00:44:02.429 "auth": { 00:44:02.429 "state": "completed", 00:44:02.429 "digest": "sha256", 00:44:02.429 "dhgroup": "ffdhe6144" 00:44:02.429 } 00:44:02.429 } 00:44:02.429 ]' 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:02.429 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:02.688 03:40:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:03.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
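(This set_options call marks the top of an inner-loop iteration: the outer loop at target/auth.sh@92 walks the configured DH groups and the inner loop at @93 walks every key index, so this section cycles sha256 against ffdhe3072, ffdhe4096, ffdhe6144 and then ffdhe8192 with keys 0 through 3 each. A sketch of that structure, with the exact array contents an assumption beyond what the xtrace shows:)

  for dhgroup in "${dhgroups[@]}"; do    # ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192 ...
      for keyid in "${!keys[@]}"; do     # 0 1 2 3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done

(The secrets handed to nvme connect use the NVMe DH-HMAC-CHAP representation DHHC-1:<hh>:<base64 key>:, where the two-digit <hh> field names the hash used to transform the configured key, 00 for none and 01/02/03 for SHA-256/384/512, which appears to be why the four test keys in this log carry 00 through 03 and differ in length.)
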
00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:03.256 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:03.824 00:44:03.824 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:03.824 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:03.824 03:40:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:03.824 { 00:44:03.824 "cntlid": 37, 00:44:03.824 "qid": 0, 00:44:03.824 "state": "enabled", 00:44:03.824 "listen_address": { 00:44:03.824 "trtype": "TCP", 00:44:03.824 "adrfam": "IPv4", 00:44:03.824 "traddr": "10.0.0.2", 00:44:03.824 "trsvcid": "4420" 00:44:03.824 }, 00:44:03.824 "peer_address": { 00:44:03.824 "trtype": "TCP", 00:44:03.824 "adrfam": "IPv4", 00:44:03.824 "traddr": "10.0.0.1", 00:44:03.824 "trsvcid": "40344" 00:44:03.824 }, 00:44:03.824 "auth": { 00:44:03.824 "state": "completed", 00:44:03.824 "digest": "sha256", 00:44:03.824 "dhgroup": "ffdhe6144" 00:44:03.824 } 00:44:03.824 } 00:44:03.824 ]' 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:03.824 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:04.083 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:04.083 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:04.083 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:04.083 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:04.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:04.652 03:40:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:44:04.911 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:44:04.911 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:04.911 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:04.911 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:04.911 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:04.911 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:04.911 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:04.912 03:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:04.912 03:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:04.912 03:40:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:04.912 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:04.912 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:05.171 00:44:05.171 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:05.171 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:05.171 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:05.430 { 00:44:05.430 "cntlid": 39, 00:44:05.430 "qid": 0, 00:44:05.430 "state": "enabled", 00:44:05.430 "listen_address": { 00:44:05.430 "trtype": "TCP", 00:44:05.430 "adrfam": "IPv4", 00:44:05.430 "traddr": "10.0.0.2", 00:44:05.430 "trsvcid": "4420" 00:44:05.430 }, 00:44:05.430 "peer_address": { 00:44:05.430 "trtype": "TCP", 00:44:05.430 "adrfam": "IPv4", 00:44:05.430 "traddr": "10.0.0.1", 00:44:05.430 "trsvcid": "40356" 00:44:05.430 }, 00:44:05.430 "auth": { 00:44:05.430 "state": "completed", 00:44:05.430 "digest": "sha256", 00:44:05.430 "dhgroup": "ffdhe6144" 00:44:05.430 } 00:44:05.430 } 00:44:05.430 ]' 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:05.430 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:05.689 03:40:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:06.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:06.257 03:40:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:06.871 00:44:06.871 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:06.871 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:06.871 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:06.871 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:06.871 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:06.871 03:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:06.871 03:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:07.130 03:40:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:07.130 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:07.130 { 00:44:07.130 "cntlid": 41, 00:44:07.130 "qid": 0, 00:44:07.130 "state": "enabled", 00:44:07.130 "listen_address": { 00:44:07.130 "trtype": "TCP", 00:44:07.130 "adrfam": "IPv4", 00:44:07.130 "traddr": "10.0.0.2", 00:44:07.130 "trsvcid": "4420" 00:44:07.130 }, 00:44:07.130 "peer_address": { 00:44:07.130 "trtype": "TCP", 00:44:07.130 "adrfam": "IPv4", 00:44:07.131 "traddr": "10.0.0.1", 00:44:07.131 "trsvcid": "40382" 00:44:07.131 }, 00:44:07.131 "auth": { 00:44:07.131 "state": "completed", 00:44:07.131 "digest": "sha256", 00:44:07.131 "dhgroup": "ffdhe8192" 00:44:07.131 } 00:44:07.131 } 00:44:07.131 ]' 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:07.131 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:07.389 03:40:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:07.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:07.956 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:08.523 00:44:08.523 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:08.523 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:08.523 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:08.781 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:08.781 03:40:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:08.781 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:08.781 03:40:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:08.781 { 00:44:08.781 "cntlid": 43, 00:44:08.781 "qid": 0, 00:44:08.781 "state": "enabled", 00:44:08.781 "listen_address": { 00:44:08.781 "trtype": "TCP", 00:44:08.781 "adrfam": "IPv4", 00:44:08.781 "traddr": "10.0.0.2", 00:44:08.781 "trsvcid": "4420" 00:44:08.781 }, 00:44:08.781 "peer_address": { 
00:44:08.781 "trtype": "TCP", 00:44:08.781 "adrfam": "IPv4", 00:44:08.781 "traddr": "10.0.0.1", 00:44:08.781 "trsvcid": "40408" 00:44:08.781 }, 00:44:08.781 "auth": { 00:44:08.781 "state": "completed", 00:44:08.781 "digest": "sha256", 00:44:08.781 "dhgroup": "ffdhe8192" 00:44:08.781 } 00:44:08.781 } 00:44:08.781 ]' 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:08.781 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:09.039 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:09.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:09.606 03:40:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:09.865 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:10.123 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:10.382 { 00:44:10.382 "cntlid": 45, 00:44:10.382 "qid": 0, 00:44:10.382 "state": "enabled", 00:44:10.382 "listen_address": { 00:44:10.382 "trtype": "TCP", 00:44:10.382 "adrfam": "IPv4", 00:44:10.382 "traddr": "10.0.0.2", 00:44:10.382 "trsvcid": "4420" 00:44:10.382 }, 00:44:10.382 "peer_address": { 00:44:10.382 "trtype": "TCP", 00:44:10.382 "adrfam": "IPv4", 00:44:10.382 "traddr": "10.0.0.1", 00:44:10.382 "trsvcid": "40432" 00:44:10.382 }, 00:44:10.382 "auth": { 00:44:10.382 "state": "completed", 00:44:10.382 "digest": "sha256", 00:44:10.382 "dhgroup": "ffdhe8192" 00:44:10.382 } 00:44:10.382 } 00:44:10.382 ]' 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:10.382 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:10.640 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:10.640 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:10.640 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:10.641 03:40:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:10.641 03:40:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:10.901 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:11.487 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:11.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:11.487 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:11.487 03:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:11.487 03:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:11.487 03:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:11.488 03:40:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
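(The entries above are one pass of target/auth.sh's connect_authenticate loop. As a minimal sketch of what the log's wrappers are doing -- the NQNs, address, and host UUID are copied verbatim from the log; rpc_cmd talks to the nvmf target application's RPC socket, while hostrpc drives a second SPDK app at /var/tmp/host.sock acting as the initiator -- one iteration is approximately:

  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

  # 1) Pin the initiator to the digest/dhgroup combination under test.
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # 2) Authorize the host on the target with the key under test
  #    (--dhchap-ctrlr-key is passed only when a controller key exists; key3 has none).
  rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key3

  # 3) Attach a controller, which performs the in-band DH-HMAC-CHAP authentication.
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key3

  # 4) Verify: the controller exists and the qpair negotiated the expected parameters.
  $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'          # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'  # digest/dhgroup/state

  # 5) Tear the controller down before the next combination.
  $HOSTRPC bdev_nvme_detach_controller nvme0

The jq assertions visible in the log -- .auth.digest, .auth.dhgroup, and .auth.state == "completed" -- confirm the negotiated parameters match what step 1 requested.)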
00:44:12.055 00:44:12.055 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:12.055 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:12.055 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:12.055 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:12.055 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:12.055 03:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:12.055 03:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:12.314 { 00:44:12.314 "cntlid": 47, 00:44:12.314 "qid": 0, 00:44:12.314 "state": "enabled", 00:44:12.314 "listen_address": { 00:44:12.314 "trtype": "TCP", 00:44:12.314 "adrfam": "IPv4", 00:44:12.314 "traddr": "10.0.0.2", 00:44:12.314 "trsvcid": "4420" 00:44:12.314 }, 00:44:12.314 "peer_address": { 00:44:12.314 "trtype": "TCP", 00:44:12.314 "adrfam": "IPv4", 00:44:12.314 "traddr": "10.0.0.1", 00:44:12.314 "trsvcid": "35490" 00:44:12.314 }, 00:44:12.314 "auth": { 00:44:12.314 "state": "completed", 00:44:12.314 "digest": "sha256", 00:44:12.314 "dhgroup": "ffdhe8192" 00:44:12.314 } 00:44:12.314 } 00:44:12.314 ]' 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:12.314 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:12.573 03:40:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:13.141 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:13.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:13.141 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:13.141 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:13.141 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:13.141 
03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:13.141 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:44:13.141 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:13.141 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:13.142 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:13.401 00:44:13.401 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:13.401 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:13.401 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:13.660 03:40:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:13.660 { 00:44:13.660 "cntlid": 49, 00:44:13.660 "qid": 0, 00:44:13.660 "state": "enabled", 00:44:13.660 "listen_address": { 00:44:13.660 "trtype": "TCP", 00:44:13.660 "adrfam": "IPv4", 00:44:13.660 "traddr": "10.0.0.2", 00:44:13.660 "trsvcid": "4420" 00:44:13.660 }, 00:44:13.660 "peer_address": { 00:44:13.660 "trtype": "TCP", 00:44:13.660 "adrfam": "IPv4", 00:44:13.660 "traddr": "10.0.0.1", 00:44:13.660 "trsvcid": "35512" 00:44:13.660 }, 00:44:13.660 "auth": { 00:44:13.660 "state": "completed", 00:44:13.660 "digest": "sha384", 00:44:13.660 "dhgroup": "null" 00:44:13.660 } 00:44:13.660 } 00:44:13.660 ]' 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:13.660 03:40:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:13.660 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:44:13.660 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:13.919 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:13.919 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:13.919 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:13.919 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:14.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:14.486 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:14.745 03:40:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:15.005 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:15.005 { 00:44:15.005 "cntlid": 51, 00:44:15.005 "qid": 0, 00:44:15.005 "state": "enabled", 00:44:15.005 "listen_address": { 00:44:15.005 "trtype": "TCP", 00:44:15.005 "adrfam": "IPv4", 00:44:15.005 "traddr": "10.0.0.2", 00:44:15.005 "trsvcid": "4420" 00:44:15.005 }, 00:44:15.005 "peer_address": { 00:44:15.005 "trtype": "TCP", 00:44:15.005 "adrfam": "IPv4", 00:44:15.005 "traddr": "10.0.0.1", 00:44:15.005 "trsvcid": "35538" 00:44:15.005 }, 00:44:15.005 "auth": { 00:44:15.005 "state": "completed", 00:44:15.005 "digest": "sha384", 00:44:15.005 "dhgroup": "null" 00:44:15.005 } 00:44:15.005 } 00:44:15.005 ]' 00:44:15.005 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:15.264 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:15.264 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:15.264 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 
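(Each pass also re-validates the same credentials from the Linux kernel initiator via nvme-cli before de-authorizing the host. A sketch, with the DHHC-1 strings elided -- the log prints the actual per-key secrets, and --dhchap-ctrl-secret is omitted on key3 passes, which have no controller key:

  # Kernel-side check: in-band DH-HMAC-CHAP with host secret and (bidirectional) controller secret.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid 803833e2-2ada-e911-906e-0017a4403562 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."

  # Disconnect and remove the host so the next digest/dhgroup/key combination starts clean.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

The "disconnected 1 controller(s)" lines in the log are nvme-cli's confirmation that the authenticated connection was established and then torn down.)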
00:44:15.264 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:15.264 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:15.264 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:15.264 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:15.523 03:40:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:16.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:44:16.091 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:16.350 00:44:16.350 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:16.350 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:16.350 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:16.609 { 00:44:16.609 "cntlid": 53, 00:44:16.609 "qid": 0, 00:44:16.609 "state": "enabled", 00:44:16.609 "listen_address": { 00:44:16.609 "trtype": "TCP", 00:44:16.609 "adrfam": "IPv4", 00:44:16.609 "traddr": "10.0.0.2", 00:44:16.609 "trsvcid": "4420" 00:44:16.609 }, 00:44:16.609 "peer_address": { 00:44:16.609 "trtype": "TCP", 00:44:16.609 "adrfam": "IPv4", 00:44:16.609 "traddr": "10.0.0.1", 00:44:16.609 "trsvcid": "35560" 00:44:16.609 }, 00:44:16.609 "auth": { 00:44:16.609 "state": "completed", 00:44:16.609 "digest": "sha384", 00:44:16.609 "dhgroup": "null" 00:44:16.609 } 00:44:16.609 } 00:44:16.609 ]' 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:16.609 03:40:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:16.868 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:17.436 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:17.436 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:17.696 03:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:17.955 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:17.955 { 00:44:17.955 "cntlid": 55, 00:44:17.955 "qid": 0, 00:44:17.955 "state": "enabled", 00:44:17.955 "listen_address": { 00:44:17.955 "trtype": "TCP", 00:44:17.955 "adrfam": "IPv4", 00:44:17.955 "traddr": "10.0.0.2", 00:44:17.955 "trsvcid": "4420" 00:44:17.955 }, 00:44:17.955 "peer_address": { 00:44:17.955 "trtype": "TCP", 00:44:17.955 "adrfam": "IPv4", 00:44:17.955 "traddr": "10.0.0.1", 00:44:17.955 "trsvcid": "35596" 00:44:17.955 }, 00:44:17.955 "auth": { 00:44:17.955 "state": "completed", 00:44:17.955 "digest": "sha384", 00:44:17.955 "dhgroup": "null" 00:44:17.955 } 00:44:17.955 } 00:44:17.955 ]' 00:44:17.955 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:18.214 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:18.214 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:18.214 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:44:18.214 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:18.214 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:18.214 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:18.214 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:18.473 03:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:19.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:44:19.042 
03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:19.042 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:19.301 00:44:19.301 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:19.301 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:19.301 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:19.560 { 00:44:19.560 "cntlid": 57, 00:44:19.560 "qid": 0, 00:44:19.560 "state": "enabled", 00:44:19.560 "listen_address": { 00:44:19.560 "trtype": "TCP", 00:44:19.560 "adrfam": "IPv4", 00:44:19.560 "traddr": "10.0.0.2", 00:44:19.560 "trsvcid": "4420" 00:44:19.560 }, 00:44:19.560 "peer_address": { 00:44:19.560 "trtype": "TCP", 00:44:19.560 "adrfam": "IPv4", 00:44:19.560 "traddr": "10.0.0.1", 00:44:19.560 "trsvcid": "35624" 00:44:19.560 }, 00:44:19.560 "auth": { 00:44:19.560 "state": "completed", 00:44:19.560 "digest": "sha384", 00:44:19.560 "dhgroup": "ffdhe2048" 00:44:19.560 } 00:44:19.560 } 00:44:19.560 ]' 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:19.560 03:41:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:19.560 03:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:19.819 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:20.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:44:20.386 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:20.646 03:41:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:20.646 03:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:20.906 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:20.906 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:20.906 { 00:44:20.906 "cntlid": 59, 00:44:20.906 "qid": 0, 00:44:20.906 "state": "enabled", 00:44:20.906 "listen_address": { 00:44:20.906 "trtype": "TCP", 00:44:20.906 "adrfam": "IPv4", 00:44:20.906 "traddr": "10.0.0.2", 00:44:20.906 "trsvcid": "4420" 00:44:20.907 }, 00:44:20.907 "peer_address": { 00:44:20.907 "trtype": "TCP", 00:44:20.907 "adrfam": "IPv4", 00:44:20.907 "traddr": "10.0.0.1", 00:44:20.907 "trsvcid": "35642" 00:44:20.907 }, 00:44:20.907 "auth": { 00:44:20.907 "state": "completed", 00:44:20.907 "digest": "sha384", 00:44:20.907 "dhgroup": "ffdhe2048" 00:44:20.907 } 00:44:20.907 } 00:44:20.907 ]' 00:44:20.907 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:21.166 03:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:21.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:44:21.734 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:21.993 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:22.252 00:44:22.252 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:22.252 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:22.252 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:22.511 { 00:44:22.511 "cntlid": 61, 00:44:22.511 "qid": 0, 00:44:22.511 "state": "enabled", 00:44:22.511 "listen_address": { 00:44:22.511 "trtype": "TCP", 00:44:22.511 "adrfam": "IPv4", 00:44:22.511 "traddr": "10.0.0.2", 00:44:22.511 "trsvcid": "4420" 00:44:22.511 }, 00:44:22.511 "peer_address": { 00:44:22.511 "trtype": "TCP", 00:44:22.511 "adrfam": "IPv4", 00:44:22.511 "traddr": "10.0.0.1", 00:44:22.511 "trsvcid": "41958" 00:44:22.511 }, 00:44:22.511 "auth": { 00:44:22.511 "state": "completed", 00:44:22.511 "digest": "sha384", 00:44:22.511 "dhgroup": "ffdhe2048" 00:44:22.511 } 00:44:22.511 } 00:44:22.511 ]' 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:22.511 03:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:22.770 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:23.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:44:23.370 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:23.629 03:41:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:23.629 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:23.888 { 00:44:23.888 "cntlid": 63, 00:44:23.888 "qid": 0, 00:44:23.888 "state": "enabled", 00:44:23.888 "listen_address": { 00:44:23.888 "trtype": "TCP", 00:44:23.888 "adrfam": "IPv4", 00:44:23.888 "traddr": "10.0.0.2", 00:44:23.888 "trsvcid": "4420" 00:44:23.888 }, 00:44:23.888 "peer_address": { 00:44:23.888 "trtype": "TCP", 00:44:23.888 "adrfam": "IPv4", 00:44:23.888 "traddr": "10.0.0.1", 00:44:23.888 "trsvcid": "41974" 00:44:23.888 }, 00:44:23.888 "auth": { 00:44:23.888 "state": "completed", 00:44:23.888 "digest": 
"sha384", 00:44:23.888 "dhgroup": "ffdhe2048" 00:44:23.888 } 00:44:23.888 } 00:44:23.888 ]' 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:23.888 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:24.147 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:24.147 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:24.147 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:24.147 03:41:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:24.714 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:24.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:24.715 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:24.973 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:25.232 00:44:25.232 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:25.232 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:25.232 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:25.491 { 00:44:25.491 "cntlid": 65, 00:44:25.491 "qid": 0, 00:44:25.491 "state": "enabled", 00:44:25.491 "listen_address": { 00:44:25.491 "trtype": "TCP", 00:44:25.491 "adrfam": "IPv4", 00:44:25.491 "traddr": "10.0.0.2", 00:44:25.491 "trsvcid": "4420" 00:44:25.491 }, 00:44:25.491 "peer_address": { 00:44:25.491 "trtype": "TCP", 00:44:25.491 "adrfam": "IPv4", 00:44:25.491 "traddr": "10.0.0.1", 00:44:25.491 "trsvcid": "41988" 00:44:25.491 }, 00:44:25.491 "auth": { 00:44:25.491 "state": "completed", 00:44:25.491 "digest": "sha384", 00:44:25.491 "dhgroup": "ffdhe3072" 00:44:25.491 } 00:44:25.491 } 00:44:25.491 ]' 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:25.491 03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:25.750 
03:41:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:26.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:26.316 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:26.575 00:44:26.575 03:41:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:26.575 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:26.575 03:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:26.833 { 00:44:26.833 "cntlid": 67, 00:44:26.833 "qid": 0, 00:44:26.833 "state": "enabled", 00:44:26.833 "listen_address": { 00:44:26.833 "trtype": "TCP", 00:44:26.833 "adrfam": "IPv4", 00:44:26.833 "traddr": "10.0.0.2", 00:44:26.833 "trsvcid": "4420" 00:44:26.833 }, 00:44:26.833 "peer_address": { 00:44:26.833 "trtype": "TCP", 00:44:26.833 "adrfam": "IPv4", 00:44:26.833 "traddr": "10.0.0.1", 00:44:26.833 "trsvcid": "42014" 00:44:26.833 }, 00:44:26.833 "auth": { 00:44:26.833 "state": "completed", 00:44:26.833 "digest": "sha384", 00:44:26.833 "dhgroup": "ffdhe3072" 00:44:26.833 } 00:44:26.833 } 00:44:26.833 ]' 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:26.833 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:27.092 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:27.092 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:27.092 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:27.092 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:27.659 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:27.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:27.659 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:27.659 03:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:27.659 03:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:27.659 
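After the SPDK-host verification, each cycle also round-trips the credentials through the kernel initiator with nvme-cli, then removes the host entry so the next key starts clean; that is the connect/disconnect/remove_host sequence recurring throughout the trace. A sketch of that leg, with the DHHC-1 secrets abbreviated (the full strings appear verbatim above; in a real setup they would be generated rather than hard-coded):

    # Kernel-initiator leg: in-band DH-CHAP via nvme-cli, then teardown.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid 803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Drop the host entry so the next key/dhgroup combination starts
    # from a clean subsystem state (target-side RPC).
    "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562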
03:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:27.659 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:27.659 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:27.659 03:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:27.918 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:28.177 00:44:28.177 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:28.177 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:28.177 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:28.436 { 00:44:28.436 "cntlid": 69, 00:44:28.436 "qid": 0, 00:44:28.436 "state": "enabled", 00:44:28.436 "listen_address": { 
00:44:28.436 "trtype": "TCP", 00:44:28.436 "adrfam": "IPv4", 00:44:28.436 "traddr": "10.0.0.2", 00:44:28.436 "trsvcid": "4420" 00:44:28.436 }, 00:44:28.436 "peer_address": { 00:44:28.436 "trtype": "TCP", 00:44:28.436 "adrfam": "IPv4", 00:44:28.436 "traddr": "10.0.0.1", 00:44:28.436 "trsvcid": "42030" 00:44:28.436 }, 00:44:28.436 "auth": { 00:44:28.436 "state": "completed", 00:44:28.436 "digest": "sha384", 00:44:28.436 "dhgroup": "ffdhe3072" 00:44:28.436 } 00:44:28.436 } 00:44:28.436 ]' 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:28.436 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:28.695 03:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:29.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:29.262 
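One asymmetry worth flagging in the key3 pass unfolding here: key3 is registered with --dhchap-key only, no --dhchap-ctrlr-key, because ckeys[3] is empty and the ${ckeys[$3]:+...} expansion at target/auth.sh@37 collapses to nothing, exercising unidirectional authentication. A self-contained illustration of that idiom (array contents hypothetical; in the script, $3 is the keyid argument of connect_authenticate):

    #!/usr/bin/env bash
    # The :+ expansion emits the option pair only when a controller key
    # exists for the key id; an empty entry yields an empty array.
    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")
    for keyid in 0 1 2 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: add_host --dhchap-key key$keyid ${ckey[*]}"
    done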
03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:29.262 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:29.520 00:44:29.520 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:29.520 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:29.521 03:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:29.779 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:29.779 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:29.779 03:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:29.779 03:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:29.779 03:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:29.779 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:29.779 { 00:44:29.779 "cntlid": 71, 00:44:29.779 "qid": 0, 00:44:29.779 "state": "enabled", 00:44:29.779 "listen_address": { 00:44:29.779 "trtype": "TCP", 00:44:29.779 "adrfam": "IPv4", 00:44:29.779 "traddr": "10.0.0.2", 00:44:29.779 "trsvcid": "4420" 00:44:29.779 }, 00:44:29.779 "peer_address": { 00:44:29.779 "trtype": "TCP", 00:44:29.779 "adrfam": "IPv4", 00:44:29.779 "traddr": "10.0.0.1", 00:44:29.779 "trsvcid": "42056" 00:44:29.779 }, 00:44:29.780 "auth": { 00:44:29.780 "state": "completed", 00:44:29.780 "digest": "sha384", 00:44:29.780 "dhgroup": "ffdhe3072" 00:44:29.780 } 00:44:29.780 } 00:44:29.780 ]' 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:29.780 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:30.039 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:30.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:30.606 03:41:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:30.867 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:31.127 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:31.127 { 00:44:31.127 "cntlid": 73, 00:44:31.127 "qid": 0, 00:44:31.127 "state": "enabled", 00:44:31.127 "listen_address": { 00:44:31.127 "trtype": "TCP", 00:44:31.127 "adrfam": "IPv4", 00:44:31.127 "traddr": "10.0.0.2", 00:44:31.127 "trsvcid": "4420" 00:44:31.127 }, 00:44:31.127 "peer_address": { 00:44:31.127 "trtype": "TCP", 00:44:31.127 "adrfam": "IPv4", 00:44:31.127 "traddr": "10.0.0.1", 00:44:31.127 "trsvcid": "42066" 00:44:31.127 }, 00:44:31.127 "auth": { 00:44:31.127 "state": "completed", 00:44:31.127 "digest": "sha384", 00:44:31.127 "dhgroup": "ffdhe4096" 00:44:31.127 } 00:44:31.127 } 00:44:31.127 ]' 00:44:31.127 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:31.386 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:31.386 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:31.386 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:31.386 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:31.386 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:31.386 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:31.386 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:31.645 03:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:32.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:32.213 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:32.471 00:44:32.471 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:32.471 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:32.471 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:32.729 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:32.729 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:32.729 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:32.729 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:44:32.729 03:41:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:32.729 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:32.729 { 00:44:32.729 "cntlid": 75, 00:44:32.729 "qid": 0, 00:44:32.729 "state": "enabled", 00:44:32.729 "listen_address": { 00:44:32.729 "trtype": "TCP", 00:44:32.729 "adrfam": "IPv4", 00:44:32.729 "traddr": "10.0.0.2", 00:44:32.729 "trsvcid": "4420" 00:44:32.729 }, 00:44:32.729 "peer_address": { 00:44:32.729 "trtype": "TCP", 00:44:32.729 "adrfam": "IPv4", 00:44:32.729 "traddr": "10.0.0.1", 00:44:32.729 "trsvcid": "41952" 00:44:32.729 }, 00:44:32.729 "auth": { 00:44:32.730 "state": "completed", 00:44:32.730 "digest": "sha384", 00:44:32.730 "dhgroup": "ffdhe4096" 00:44:32.730 } 00:44:32.730 } 00:44:32.730 ]' 00:44:32.730 03:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:32.730 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:32.730 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:32.730 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:32.730 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:32.730 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:32.730 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:32.730 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:32.988 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:33.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:33.556 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:33.815 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:33.816 03:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:34.075 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:34.075 { 00:44:34.075 "cntlid": 77, 00:44:34.075 "qid": 0, 00:44:34.075 "state": "enabled", 00:44:34.075 "listen_address": { 00:44:34.075 "trtype": "TCP", 00:44:34.075 "adrfam": "IPv4", 00:44:34.075 "traddr": "10.0.0.2", 00:44:34.075 "trsvcid": "4420" 00:44:34.075 }, 00:44:34.075 "peer_address": { 00:44:34.075 "trtype": "TCP", 00:44:34.075 "adrfam": "IPv4", 00:44:34.075 "traddr": "10.0.0.1", 00:44:34.075 "trsvcid": "41984" 00:44:34.075 }, 00:44:34.075 "auth": { 00:44:34.075 "state": "completed", 00:44:34.075 "digest": "sha384", 00:44:34.075 "dhgroup": "ffdhe4096" 00:44:34.075 } 00:44:34.075 } 00:44:34.075 ]' 00:44:34.075 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:34.334 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:34.334 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:44:34.334 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:34.334 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:34.334 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:34.334 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:34.334 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:34.592 03:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:35.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:35.160 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:35.418 00:44:35.418 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:35.418 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:35.418 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:35.678 { 00:44:35.678 "cntlid": 79, 00:44:35.678 "qid": 0, 00:44:35.678 "state": "enabled", 00:44:35.678 "listen_address": { 00:44:35.678 "trtype": "TCP", 00:44:35.678 "adrfam": "IPv4", 00:44:35.678 "traddr": "10.0.0.2", 00:44:35.678 "trsvcid": "4420" 00:44:35.678 }, 00:44:35.678 "peer_address": { 00:44:35.678 "trtype": "TCP", 00:44:35.678 "adrfam": "IPv4", 00:44:35.678 "traddr": "10.0.0.1", 00:44:35.678 "trsvcid": "42006" 00:44:35.678 }, 00:44:35.678 "auth": { 00:44:35.678 "state": "completed", 00:44:35.678 "digest": "sha384", 00:44:35.678 "dhgroup": "ffdhe4096" 00:44:35.678 } 00:44:35.678 } 00:44:35.678 ]' 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:44:35.678 03:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:35.678 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:35.678 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:35.678 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:35.937 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:36.503 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:36.503 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:36.762 03:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:37.021 00:44:37.021 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:37.021 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:37.021 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:37.280 { 00:44:37.280 "cntlid": 81, 00:44:37.280 "qid": 0, 00:44:37.280 "state": "enabled", 00:44:37.280 "listen_address": { 00:44:37.280 "trtype": "TCP", 00:44:37.280 "adrfam": "IPv4", 00:44:37.280 "traddr": "10.0.0.2", 00:44:37.280 "trsvcid": "4420" 00:44:37.280 }, 00:44:37.280 "peer_address": { 00:44:37.280 "trtype": "TCP", 00:44:37.280 "adrfam": "IPv4", 00:44:37.280 "traddr": "10.0.0.1", 00:44:37.280 "trsvcid": "42018" 00:44:37.280 }, 00:44:37.280 "auth": { 00:44:37.280 "state": "completed", 00:44:37.280 "digest": "sha384", 00:44:37.280 "dhgroup": "ffdhe6144" 00:44:37.280 } 00:44:37.280 } 00:44:37.280 ]' 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:37.280 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:37.539 03:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:38.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.108 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:38.676 00:44:38.676 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:38.676 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:38.677 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:38.677 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:38.677 03:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:38.677 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:38.677 03:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:38.677 03:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:38.677 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:38.677 { 00:44:38.677 "cntlid": 83, 00:44:38.677 "qid": 0, 00:44:38.677 "state": "enabled", 00:44:38.677 "listen_address": { 00:44:38.677 "trtype": "TCP", 00:44:38.677 "adrfam": "IPv4", 00:44:38.677 "traddr": "10.0.0.2", 00:44:38.677 "trsvcid": "4420" 00:44:38.677 }, 00:44:38.677 "peer_address": { 00:44:38.677 "trtype": "TCP", 00:44:38.677 "adrfam": "IPv4", 00:44:38.677 "traddr": "10.0.0.1", 00:44:38.677 "trsvcid": "42048" 00:44:38.677 }, 00:44:38.677 "auth": { 00:44:38.677 "state": "completed", 00:44:38.677 "digest": "sha384", 00:44:38.677 
"dhgroup": "ffdhe6144" 00:44:38.677 } 00:44:38.677 } 00:44:38.677 ]' 00:44:38.677 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:38.677 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:38.677 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:38.935 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:38.936 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:38.936 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:38.936 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:38.936 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:38.936 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:39.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:39.502 03:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:39.760 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:40.050 00:44:40.050 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:40.050 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:40.050 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:40.308 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:40.309 { 00:44:40.309 "cntlid": 85, 00:44:40.309 "qid": 0, 00:44:40.309 "state": "enabled", 00:44:40.309 "listen_address": { 00:44:40.309 "trtype": "TCP", 00:44:40.309 "adrfam": "IPv4", 00:44:40.309 "traddr": "10.0.0.2", 00:44:40.309 "trsvcid": "4420" 00:44:40.309 }, 00:44:40.309 "peer_address": { 00:44:40.309 "trtype": "TCP", 00:44:40.309 "adrfam": "IPv4", 00:44:40.309 "traddr": "10.0.0.1", 00:44:40.309 "trsvcid": "42068" 00:44:40.309 }, 00:44:40.309 "auth": { 00:44:40.309 "state": "completed", 00:44:40.309 "digest": "sha384", 00:44:40.309 "dhgroup": "ffdhe6144" 00:44:40.309 } 00:44:40.309 } 00:44:40.309 ]' 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:40.309 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:40.567 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:40.567 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:40.567 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:40.567 03:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:41.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:41.135 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:41.394 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:41.653 00:44:41.653 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:41.653 03:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:41.653 03:41:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:41.911 { 00:44:41.911 "cntlid": 87, 00:44:41.911 "qid": 0, 00:44:41.911 "state": "enabled", 00:44:41.911 "listen_address": { 00:44:41.911 "trtype": "TCP", 00:44:41.911 "adrfam": "IPv4", 00:44:41.911 "traddr": "10.0.0.2", 00:44:41.911 "trsvcid": "4420" 00:44:41.911 }, 00:44:41.911 "peer_address": { 00:44:41.911 "trtype": "TCP", 00:44:41.911 "adrfam": "IPv4", 00:44:41.911 "traddr": "10.0.0.1", 00:44:41.911 "trsvcid": "43536" 00:44:41.911 }, 00:44:41.911 "auth": { 00:44:41.911 "state": "completed", 00:44:41.911 "digest": "sha384", 00:44:41.911 "dhgroup": "ffdhe6144" 00:44:41.911 } 00:44:41.911 } 00:44:41.911 ]' 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:41.911 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:42.170 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:42.738 03:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:42.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:42.738 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:42.738 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:42.738 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.738 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:42.738 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:42.738 03:41:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:42.738 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:42.738 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:42.997 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:43.257 00:44:43.257 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:43.257 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:43.516 { 00:44:43.516 "cntlid": 89, 00:44:43.516 "qid": 0, 00:44:43.516 "state": "enabled", 00:44:43.516 "listen_address": { 00:44:43.516 "trtype": "TCP", 00:44:43.516 "adrfam": "IPv4", 00:44:43.516 "traddr": "10.0.0.2", 00:44:43.516 
"trsvcid": "4420" 00:44:43.516 }, 00:44:43.516 "peer_address": { 00:44:43.516 "trtype": "TCP", 00:44:43.516 "adrfam": "IPv4", 00:44:43.516 "traddr": "10.0.0.1", 00:44:43.516 "trsvcid": "43566" 00:44:43.516 }, 00:44:43.516 "auth": { 00:44:43.516 "state": "completed", 00:44:43.516 "digest": "sha384", 00:44:43.516 "dhgroup": "ffdhe8192" 00:44:43.516 } 00:44:43.516 } 00:44:43.516 ]' 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:43.516 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:43.775 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:43.775 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:43.775 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:43.775 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:43.775 03:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:43.775 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:44.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:44.342 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:44.603 03:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:45.170 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:45.170 { 00:44:45.170 "cntlid": 91, 00:44:45.170 "qid": 0, 00:44:45.170 "state": "enabled", 00:44:45.170 "listen_address": { 00:44:45.170 "trtype": "TCP", 00:44:45.170 "adrfam": "IPv4", 00:44:45.170 "traddr": "10.0.0.2", 00:44:45.170 "trsvcid": "4420" 00:44:45.170 }, 00:44:45.170 "peer_address": { 00:44:45.170 "trtype": "TCP", 00:44:45.170 "adrfam": "IPv4", 00:44:45.170 "traddr": "10.0.0.1", 00:44:45.170 "trsvcid": "43576" 00:44:45.170 }, 00:44:45.170 "auth": { 00:44:45.170 "state": "completed", 00:44:45.170 "digest": "sha384", 00:44:45.170 "dhgroup": "ffdhe8192" 00:44:45.170 } 00:44:45.170 } 00:44:45.170 ]' 00:44:45.170 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:45.429 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:45.429 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:45.429 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:45.429 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:45.429 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:45.429 03:41:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:45.430 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:45.689 03:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:46.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:46.256 03:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:46.822 00:44:46.822 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:46.822 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:46.822 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:47.081 { 00:44:47.081 "cntlid": 93, 00:44:47.081 "qid": 0, 00:44:47.081 "state": "enabled", 00:44:47.081 "listen_address": { 00:44:47.081 "trtype": "TCP", 00:44:47.081 "adrfam": "IPv4", 00:44:47.081 "traddr": "10.0.0.2", 00:44:47.081 "trsvcid": "4420" 00:44:47.081 }, 00:44:47.081 "peer_address": { 00:44:47.081 "trtype": "TCP", 00:44:47.081 "adrfam": "IPv4", 00:44:47.081 "traddr": "10.0.0.1", 00:44:47.081 "trsvcid": "43604" 00:44:47.081 }, 00:44:47.081 "auth": { 00:44:47.081 "state": "completed", 00:44:47.081 "digest": "sha384", 00:44:47.081 "dhgroup": "ffdhe8192" 00:44:47.081 } 00:44:47.081 } 00:44:47.081 ]' 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:47.081 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:47.340 03:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:47.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:47.909 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:48.475 00:44:48.475 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:48.475 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:48.475 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:48.733 03:41:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:48.733 { 00:44:48.733 "cntlid": 95, 00:44:48.733 "qid": 0, 00:44:48.733 "state": "enabled", 00:44:48.733 "listen_address": { 00:44:48.733 "trtype": "TCP", 00:44:48.733 "adrfam": "IPv4", 00:44:48.733 "traddr": "10.0.0.2", 00:44:48.733 "trsvcid": "4420" 00:44:48.733 }, 00:44:48.733 "peer_address": { 00:44:48.733 "trtype": "TCP", 00:44:48.733 "adrfam": "IPv4", 00:44:48.733 "traddr": "10.0.0.1", 00:44:48.733 "trsvcid": "43642" 00:44:48.733 }, 00:44:48.733 "auth": { 00:44:48.733 "state": "completed", 00:44:48.733 "digest": "sha384", 00:44:48.733 "dhgroup": "ffdhe8192" 00:44:48.733 } 00:44:48.733 } 00:44:48.733 ]' 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:44:48.733 03:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:48.733 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:44:48.733 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:48.733 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:48.733 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:48.733 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:48.992 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:49.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:49.560 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:49.819 03:41:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:49.819 03:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:49.819 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:49.819 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:49.819 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:50.078 { 00:44:50.078 "cntlid": 97, 00:44:50.078 "qid": 0, 00:44:50.078 "state": "enabled", 00:44:50.078 "listen_address": { 00:44:50.078 "trtype": "TCP", 00:44:50.078 "adrfam": "IPv4", 00:44:50.078 "traddr": "10.0.0.2", 00:44:50.078 "trsvcid": "4420" 00:44:50.078 }, 00:44:50.078 "peer_address": { 00:44:50.078 "trtype": "TCP", 00:44:50.078 "adrfam": "IPv4", 00:44:50.078 "traddr": "10.0.0.1", 00:44:50.078 "trsvcid": "43658" 00:44:50.078 }, 00:44:50.078 "auth": { 00:44:50.078 "state": "completed", 00:44:50.078 "digest": "sha512", 00:44:50.078 "dhgroup": "null" 00:44:50.078 } 00:44:50.078 } 00:44:50.078 ]' 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:50.078 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:44:50.337 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:44:50.337 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:50.337 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:50.337 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:50.337 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:50.337 03:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:50.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:50.904 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:51.163 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:51.422 00:44:51.422 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:51.422 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:51.422 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:51.681 { 00:44:51.681 "cntlid": 99, 00:44:51.681 "qid": 0, 00:44:51.681 "state": "enabled", 00:44:51.681 "listen_address": { 00:44:51.681 "trtype": "TCP", 00:44:51.681 "adrfam": "IPv4", 00:44:51.681 "traddr": "10.0.0.2", 00:44:51.681 "trsvcid": "4420" 00:44:51.681 }, 00:44:51.681 "peer_address": { 00:44:51.681 "trtype": "TCP", 00:44:51.681 "adrfam": "IPv4", 00:44:51.681 "traddr": "10.0.0.1", 00:44:51.681 "trsvcid": "52508" 00:44:51.681 }, 00:44:51.681 "auth": { 00:44:51.681 "state": "completed", 00:44:51.681 "digest": "sha512", 00:44:51.681 "dhgroup": "null" 00:44:51.681 } 00:44:51.681 } 00:44:51.681 ]' 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:44:51.681 03:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:51.681 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:51.681 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:51.681 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:51.940 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 
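
The nvme connect invocation traced just above is the kernel-initiator leg that each iteration runs after the SPDK host detaches: nvme-cli re-authenticates against the same subsystem with the same DH-CHAP secret pair, then the test disconnects and removes the host before moving to the next key id. A minimal standalone sketch of that leg, using the key1/ckey1 secrets, address, and NQNs copied from this trace — a condensed illustration of the flow, not the literal target/auth.sh source:

    #!/usr/bin/env bash
    # Kernel-initiator leg of one auth iteration, as seen in this log:
    # connect with DH-CHAP host and controller secrets, then disconnect.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostid=803833e2-2ada-e911-906e-0017a4403562
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}"

    # Secrets below are the key1/ckey1 values visible in the trace above.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv:' \
        --dhchap-ctrl-secret 'DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==:'

    # Prints "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)" on success.
    nvme disconnect -n "$subnqn"
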
00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:52.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:52.506 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:52.765 03:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:52.765 00:44:52.765 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:52.765 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:52.765 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:53.024 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:53.024 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:53.024 03:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:53.024 03:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:53.024 03:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:53.024 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:53.024 { 00:44:53.024 "cntlid": 101, 00:44:53.024 "qid": 0, 00:44:53.024 "state": "enabled", 00:44:53.024 "listen_address": { 00:44:53.025 "trtype": "TCP", 00:44:53.025 "adrfam": "IPv4", 00:44:53.025 "traddr": "10.0.0.2", 00:44:53.025 "trsvcid": "4420" 00:44:53.025 }, 00:44:53.025 "peer_address": { 00:44:53.025 "trtype": "TCP", 00:44:53.025 "adrfam": "IPv4", 00:44:53.025 "traddr": "10.0.0.1", 00:44:53.025 "trsvcid": "52528" 00:44:53.025 }, 00:44:53.025 "auth": { 00:44:53.025 "state": "completed", 00:44:53.025 "digest": "sha512", 00:44:53.025 "dhgroup": "null" 00:44:53.025 } 00:44:53.025 } 00:44:53.025 ]' 00:44:53.025 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:53.025 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:53.025 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:53.284 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:44:53.284 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:53.284 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:53.284 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:53.284 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:53.284 03:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:53.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:44:53.852 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:54.110 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:44:54.368 00:44:54.368 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:54.368 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:54.368 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:54.627 { 00:44:54.627 "cntlid": 103, 00:44:54.627 "qid": 0, 00:44:54.627 "state": "enabled", 00:44:54.627 "listen_address": { 00:44:54.627 "trtype": "TCP", 00:44:54.627 "adrfam": "IPv4", 00:44:54.627 "traddr": "10.0.0.2", 00:44:54.627 "trsvcid": "4420" 00:44:54.627 }, 00:44:54.627 "peer_address": { 00:44:54.627 "trtype": "TCP", 00:44:54.627 "adrfam": "IPv4", 00:44:54.627 "traddr": "10.0.0.1", 00:44:54.627 "trsvcid": "52564" 00:44:54.627 }, 00:44:54.627 "auth": { 00:44:54.627 "state": "completed", 00:44:54.627 "digest": "sha512", 00:44:54.627 "dhgroup": "null" 00:44:54.627 } 00:44:54.627 } 00:44:54.627 ]' 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:54.627 03:41:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:54.627 03:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:54.886 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:55.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:55.455 03:41:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:55.455 03:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:44:55.714 00:44:55.714 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:55.714 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:55.714 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:55.972 { 00:44:55.972 "cntlid": 105, 00:44:55.972 "qid": 0, 00:44:55.972 "state": "enabled", 00:44:55.972 "listen_address": { 00:44:55.972 "trtype": "TCP", 00:44:55.972 "adrfam": "IPv4", 00:44:55.972 "traddr": "10.0.0.2", 00:44:55.972 "trsvcid": "4420" 00:44:55.972 }, 00:44:55.972 "peer_address": { 00:44:55.972 "trtype": "TCP", 00:44:55.972 "adrfam": "IPv4", 00:44:55.972 "traddr": "10.0.0.1", 00:44:55.972 "trsvcid": "52596" 00:44:55.972 }, 00:44:55.972 "auth": { 00:44:55.972 "state": "completed", 00:44:55.972 "digest": "sha512", 00:44:55.972 "dhgroup": "ffdhe2048" 00:44:55.972 } 00:44:55.972 } 00:44:55.972 ]' 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:55.972 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:55.973 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:55.973 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:55.973 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:56.231 03:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 
803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:44:56.827 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:56.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:56.827 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:56.827 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:56.828 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:56.828 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:56.828 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:56.828 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:56.828 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:57.087 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:44:57.087 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:57.346 { 00:44:57.346 "cntlid": 107, 00:44:57.346 "qid": 0, 00:44:57.346 "state": "enabled", 00:44:57.346 "listen_address": { 00:44:57.346 "trtype": "TCP", 00:44:57.346 "adrfam": "IPv4", 00:44:57.346 "traddr": "10.0.0.2", 00:44:57.346 "trsvcid": "4420" 00:44:57.346 }, 00:44:57.346 "peer_address": { 00:44:57.346 "trtype": "TCP", 00:44:57.346 "adrfam": "IPv4", 00:44:57.346 "traddr": "10.0.0.1", 00:44:57.346 "trsvcid": "52618" 00:44:57.346 }, 00:44:57.346 "auth": { 00:44:57.346 "state": "completed", 00:44:57.346 "digest": "sha512", 00:44:57.346 "dhgroup": "ffdhe2048" 00:44:57.346 } 00:44:57.346 } 00:44:57.346 ]' 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:57.346 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:57.605 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:57.606 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:57.606 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:57.606 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:57.606 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:57.606 03:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:44:58.173 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:58.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:58.173 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:58.173 03:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:58.173 03:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:58.173 03:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:58.173 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:58.173 03:41:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:58.173 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:58.432 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:44:58.691 00:44:58.691 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:44:58.691 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:44:58.691 03:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:44:58.949 { 00:44:58.949 "cntlid": 109, 00:44:58.949 "qid": 0, 00:44:58.949 "state": "enabled", 00:44:58.949 "listen_address": { 00:44:58.949 "trtype": "TCP", 00:44:58.949 "adrfam": "IPv4", 00:44:58.949 "traddr": "10.0.0.2", 00:44:58.949 "trsvcid": "4420" 00:44:58.949 }, 00:44:58.949 "peer_address": { 00:44:58.949 "trtype": "TCP", 00:44:58.949 
"adrfam": "IPv4", 00:44:58.949 "traddr": "10.0.0.1", 00:44:58.949 "trsvcid": "52644" 00:44:58.949 }, 00:44:58.949 "auth": { 00:44:58.949 "state": "completed", 00:44:58.949 "digest": "sha512", 00:44:58.949 "dhgroup": "ffdhe2048" 00:44:58.949 } 00:44:58.949 } 00:44:58.949 ]' 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:44:58.949 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:44:59.208 03:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:44:59.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:44:59.776 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:00.035 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:00.035 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:00.294 { 00:45:00.294 "cntlid": 111, 00:45:00.294 "qid": 0, 00:45:00.294 "state": "enabled", 00:45:00.294 "listen_address": { 00:45:00.294 "trtype": "TCP", 00:45:00.294 "adrfam": "IPv4", 00:45:00.294 "traddr": "10.0.0.2", 00:45:00.294 "trsvcid": "4420" 00:45:00.294 }, 00:45:00.294 "peer_address": { 00:45:00.294 "trtype": "TCP", 00:45:00.294 "adrfam": "IPv4", 00:45:00.294 "traddr": "10.0.0.1", 00:45:00.294 "trsvcid": "52672" 00:45:00.294 }, 00:45:00.294 "auth": { 00:45:00.294 "state": "completed", 00:45:00.294 "digest": "sha512", 00:45:00.294 "dhgroup": "ffdhe2048" 00:45:00.294 } 00:45:00.294 } 00:45:00.294 ]' 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:00.294 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:00.554 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:45:00.554 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:00.554 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:00.554 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:00.554 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:00.554 03:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:01.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:01.122 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:01.381 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
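[Annotation] Every iteration in this stretch repeats the same target/auth.sh@92-96 sequence visible in the trace: reconfigure the host-side initiator's allowed digest and DH group, register the host NQN on the subsystem with the keypair under test, attach an authenticated nvme0 controller over the /var/tmp/host.sock RPC socket, then detach it and remove the host again. A condensed sketch of that loop, assuming the rpc.py path and sockets shown in the log, and assuming keys key0..key3 and controller keys ckey0..ckey2 were loaded earlier in the run (key3 has no controller key, which is why the add_host calls for key3 omit --dhchap-ctrlr-key):

#!/usr/bin/env bash
# Condensed sketch of the (dhgroup, keyid) loop from target/auth.sh, with the
# sha512 digest fixed as in this part of the run. Paths and NQNs are copied
# from the trace; key material is assumed to be pre-loaded on both apps.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
  for keyid in 0 1 2 3; do
    # Mirror the ${ckeys[$3]:+...} expansion: keyid 3 carries no ctrlr key.
    ckey=()
    [[ $keyid -lt 3 ]] && ckey=(--dhchap-ctrlr-key "ckey$keyid")
    # hostrpc: restrict the initiator to one digest/dhgroup combination.
    "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    # rpc_cmd (target's default socket): allow this host with the keypair.
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # Attach an authenticated controller, then detach and drop the host.
    "$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key "key$keyid" "${ckey[@]}"
    "$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
  done
done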
00:45:01.639 00:45:01.639 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:01.639 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:01.639 03:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:01.897 { 00:45:01.897 "cntlid": 113, 00:45:01.897 "qid": 0, 00:45:01.897 "state": "enabled", 00:45:01.897 "listen_address": { 00:45:01.897 "trtype": "TCP", 00:45:01.897 "adrfam": "IPv4", 00:45:01.897 "traddr": "10.0.0.2", 00:45:01.897 "trsvcid": "4420" 00:45:01.897 }, 00:45:01.897 "peer_address": { 00:45:01.897 "trtype": "TCP", 00:45:01.897 "adrfam": "IPv4", 00:45:01.897 "traddr": "10.0.0.1", 00:45:01.897 "trsvcid": "35366" 00:45:01.897 }, 00:45:01.897 "auth": { 00:45:01.897 "state": "completed", 00:45:01.897 "digest": "sha512", 00:45:01.897 "dhgroup": "ffdhe3072" 00:45:01.897 } 00:45:01.897 } 00:45:01.897 ]' 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:01.897 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:02.156 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:45:02.723 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:02.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:02.723 03:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:02.723 03:41:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
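[Annotation] Between attach and detach, auth.sh@44-48 asserts that authentication actually took place: bdev_nvme_get_controllers on the host must report the controller by its -b name, and nvmf_subsystem_get_qpairs on the target must return a qpair whose auth block records the negotiated digest and dhgroup with state "completed" (the JSON arrays quoted throughout this stretch). A sketch of that verification step using the same jq filters as the trace; the ffdhe2048 value below is the current iteration's dhgroup and varies per pass:

#!/usr/bin/env bash
set -e
# Verification mirroring target/auth.sh@44-48 (sketch; paths from the log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: the controller attached as -b nvme0 must be visible by name.
[[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers \
      | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair's auth block must carry the negotiated parameters.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]  # per-iteration
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]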
00:45:02.723 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:02.723 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:02.723 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:02.723 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:02.723 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:02.982 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:03.242 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:03.242 { 00:45:03.242 
"cntlid": 115, 00:45:03.242 "qid": 0, 00:45:03.242 "state": "enabled", 00:45:03.242 "listen_address": { 00:45:03.242 "trtype": "TCP", 00:45:03.242 "adrfam": "IPv4", 00:45:03.242 "traddr": "10.0.0.2", 00:45:03.242 "trsvcid": "4420" 00:45:03.242 }, 00:45:03.242 "peer_address": { 00:45:03.242 "trtype": "TCP", 00:45:03.242 "adrfam": "IPv4", 00:45:03.242 "traddr": "10.0.0.1", 00:45:03.242 "trsvcid": "35394" 00:45:03.242 }, 00:45:03.242 "auth": { 00:45:03.242 "state": "completed", 00:45:03.242 "digest": "sha512", 00:45:03.242 "dhgroup": "ffdhe3072" 00:45:03.242 } 00:45:03.242 } 00:45:03.242 ]' 00:45:03.242 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:03.501 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:03.501 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:03.501 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:03.501 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:03.501 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:03.501 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:03.501 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:03.760 03:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:04.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:04.326 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:04.585 00:45:04.585 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:04.585 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:04.585 03:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:04.844 { 00:45:04.844 "cntlid": 117, 00:45:04.844 "qid": 0, 00:45:04.844 "state": "enabled", 00:45:04.844 "listen_address": { 00:45:04.844 "trtype": "TCP", 00:45:04.844 "adrfam": "IPv4", 00:45:04.844 "traddr": "10.0.0.2", 00:45:04.844 "trsvcid": "4420" 00:45:04.844 }, 00:45:04.844 "peer_address": { 00:45:04.844 "trtype": "TCP", 00:45:04.844 "adrfam": "IPv4", 00:45:04.844 "traddr": "10.0.0.1", 00:45:04.844 "trsvcid": "35404" 00:45:04.844 }, 00:45:04.844 "auth": { 00:45:04.844 "state": "completed", 00:45:04.844 "digest": "sha512", 00:45:04.844 "dhgroup": "ffdhe3072" 00:45:04.844 } 00:45:04.844 } 00:45:04.844 ]' 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:04.844 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:05.103 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:05.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:05.669 03:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:05.927 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:06.187 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:06.187 { 00:45:06.187 "cntlid": 119, 00:45:06.187 "qid": 0, 00:45:06.187 "state": "enabled", 00:45:06.187 "listen_address": { 00:45:06.187 "trtype": "TCP", 00:45:06.187 "adrfam": "IPv4", 00:45:06.187 "traddr": "10.0.0.2", 00:45:06.187 "trsvcid": "4420" 00:45:06.187 }, 00:45:06.187 "peer_address": { 00:45:06.187 "trtype": "TCP", 00:45:06.187 "adrfam": "IPv4", 00:45:06.187 "traddr": "10.0.0.1", 00:45:06.187 "trsvcid": "35432" 00:45:06.187 }, 00:45:06.187 "auth": { 00:45:06.187 "state": "completed", 00:45:06.187 "digest": "sha512", 00:45:06.187 "dhgroup": "ffdhe3072" 00:45:06.187 } 00:45:06.187 } 00:45:06.187 ]' 00:45:06.187 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:06.446 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:06.446 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:06.446 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:45:06.446 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:06.446 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:06.446 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:06.446 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:06.706 03:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:07.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:07.274 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:07.532 00:45:07.532 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:07.532 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:07.532 03:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:07.791 03:41:49 
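The three assertions that follow each attach are plain jq probes over the nvmf_subsystem_get_qpairs output. A minimal standalone equivalent, assuming the target-side RPC socket and the subsystem NQN shown in this log:

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]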
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:07.791 { 00:45:07.791 "cntlid": 121, 00:45:07.791 "qid": 0, 00:45:07.791 "state": "enabled", 00:45:07.791 "listen_address": { 00:45:07.791 "trtype": "TCP", 00:45:07.791 "adrfam": "IPv4", 00:45:07.791 "traddr": "10.0.0.2", 00:45:07.791 "trsvcid": "4420" 00:45:07.791 }, 00:45:07.791 "peer_address": { 00:45:07.791 "trtype": "TCP", 00:45:07.791 "adrfam": "IPv4", 00:45:07.791 "traddr": "10.0.0.1", 00:45:07.791 "trsvcid": "35458" 00:45:07.791 }, 00:45:07.791 "auth": { 00:45:07.791 "state": "completed", 00:45:07.791 "digest": "sha512", 00:45:07.791 "dhgroup": "ffdhe4096" 00:45:07.791 } 00:45:07.791 } 00:45:07.791 ]' 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:07.791 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:08.050 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:08.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:08.618 03:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:08.877 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:09.135 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:09.135 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:09.135 { 00:45:09.135 "cntlid": 123, 00:45:09.135 "qid": 0, 00:45:09.135 "state": "enabled", 00:45:09.135 "listen_address": { 00:45:09.135 "trtype": "TCP", 00:45:09.135 "adrfam": "IPv4", 00:45:09.135 "traddr": "10.0.0.2", 00:45:09.135 "trsvcid": "4420" 00:45:09.135 }, 00:45:09.135 "peer_address": { 00:45:09.135 "trtype": "TCP", 00:45:09.135 "adrfam": "IPv4", 00:45:09.135 "traddr": "10.0.0.1", 00:45:09.135 "trsvcid": "35486" 00:45:09.135 }, 00:45:09.135 "auth": { 00:45:09.135 "state": "completed", 00:45:09.135 "digest": "sha512", 00:45:09.135 "dhgroup": "ffdhe4096" 00:45:09.135 } 00:45:09.135 } 00:45:09.135 ]' 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:09.393 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:09.650 03:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:10.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:10.217 
03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:10.217 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:10.475 00:45:10.475 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:10.475 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:10.475 03:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:10.733 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:10.733 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:10.733 03:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:10.733 03:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:10.733 03:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:10.733 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:10.733 { 00:45:10.733 "cntlid": 125, 00:45:10.733 "qid": 0, 00:45:10.733 "state": "enabled", 00:45:10.733 "listen_address": { 00:45:10.733 "trtype": "TCP", 00:45:10.733 "adrfam": "IPv4", 00:45:10.733 "traddr": "10.0.0.2", 00:45:10.733 "trsvcid": "4420" 00:45:10.733 }, 00:45:10.733 "peer_address": { 00:45:10.733 "trtype": "TCP", 00:45:10.733 "adrfam": "IPv4", 00:45:10.733 "traddr": "10.0.0.1", 00:45:10.733 "trsvcid": "35518" 00:45:10.733 }, 00:45:10.733 "auth": { 00:45:10.734 "state": "completed", 00:45:10.734 "digest": "sha512", 00:45:10.734 "dhgroup": "ffdhe4096" 00:45:10.734 } 00:45:10.734 } 00:45:10.734 ]' 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:10.734 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:10.990 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:11.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:11.554 03:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:45:11.812 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:45:11.812 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:11.813 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:12.070 00:45:12.070 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:12.070 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:12.070 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:12.329 { 00:45:12.329 "cntlid": 127, 00:45:12.329 "qid": 0, 00:45:12.329 "state": "enabled", 00:45:12.329 "listen_address": { 00:45:12.329 "trtype": "TCP", 00:45:12.329 "adrfam": "IPv4", 00:45:12.329 "traddr": "10.0.0.2", 00:45:12.329 "trsvcid": "4420" 00:45:12.329 }, 00:45:12.329 "peer_address": { 00:45:12.329 "trtype": "TCP", 00:45:12.329 "adrfam": "IPv4", 00:45:12.329 "traddr": "10.0.0.1", 00:45:12.329 "trsvcid": "53060" 00:45:12.329 }, 00:45:12.329 "auth": { 00:45:12.329 "state": "completed", 00:45:12.329 "digest": "sha512", 00:45:12.329 "dhgroup": "ffdhe4096" 00:45:12.329 } 00:45:12.329 } 00:45:12.329 ]' 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:12.329 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:12.588 03:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:45:13.154 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:13.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
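Each pass of the outer loop pins one digest/dhgroup pair (the dhgroups walked here, ffdhe3072 through ffdhe8192, are the finite-field groups from RFC 7919) and then repeats the same RPC sequence for every key id. A condensed sketch of one iteration, assuming the socket paths and NQNs from this log; SUBNQN and HOSTNQN are illustrative shorthand, not variables from target/auth.sh:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
  # Host side (/var/tmp/host.sock): restrict negotiation to one digest/dhgroup.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # Target side (default socket): register the host with the keys under test.
  scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach, which forces a DH-HMAC-CHAP exchange; then verify and detach.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0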
00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:13.155 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:13.438 00:45:13.704 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:13.704 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:13.704 03:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:13.704 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:13.704 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:13.704 03:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:13.704 03:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:13.704 03:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:13.704 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:13.704 { 00:45:13.704 "cntlid": 129, 00:45:13.704 "qid": 0, 00:45:13.704 "state": "enabled", 00:45:13.704 "listen_address": { 00:45:13.704 "trtype": "TCP", 00:45:13.704 "adrfam": "IPv4", 00:45:13.704 "traddr": "10.0.0.2", 00:45:13.704 "trsvcid": "4420" 00:45:13.704 }, 00:45:13.704 "peer_address": { 00:45:13.704 "trtype": "TCP", 00:45:13.704 "adrfam": "IPv4", 00:45:13.704 "traddr": "10.0.0.1", 00:45:13.705 "trsvcid": "53086" 00:45:13.705 }, 00:45:13.705 "auth": { 
00:45:13.705 "state": "completed", 00:45:13.705 "digest": "sha512", 00:45:13.705 "dhgroup": "ffdhe6144" 00:45:13.705 } 00:45:13.705 } 00:45:13.705 ]' 00:45:13.705 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:13.705 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:13.705 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:13.963 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:13.963 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:13.963 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:13.963 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:13.963 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:13.963 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:14.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:14.530 03:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:14.789 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:15.048 00:45:15.048 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:15.048 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:15.048 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:15.306 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:15.306 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:15.306 03:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:15.307 { 00:45:15.307 "cntlid": 131, 00:45:15.307 "qid": 0, 00:45:15.307 "state": "enabled", 00:45:15.307 "listen_address": { 00:45:15.307 "trtype": "TCP", 00:45:15.307 "adrfam": "IPv4", 00:45:15.307 "traddr": "10.0.0.2", 00:45:15.307 "trsvcid": "4420" 00:45:15.307 }, 00:45:15.307 "peer_address": { 00:45:15.307 "trtype": "TCP", 00:45:15.307 "adrfam": "IPv4", 00:45:15.307 "traddr": "10.0.0.1", 00:45:15.307 "trsvcid": "53126" 00:45:15.307 }, 00:45:15.307 "auth": { 00:45:15.307 "state": "completed", 00:45:15.307 "digest": "sha512", 00:45:15.307 "dhgroup": "ffdhe6144" 00:45:15.307 } 00:45:15.307 } 00:45:15.307 ]' 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:15.307 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:15.565 03:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:16.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:16.132 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:16.391 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:45:16.650 00:45:16.650 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:16.650 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:16.650 03:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:16.908 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:16.908 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:16.908 03:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:16.908 03:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:16.908 03:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:16.908 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:16.908 { 00:45:16.908 "cntlid": 133, 00:45:16.908 "qid": 0, 00:45:16.908 "state": "enabled", 00:45:16.908 "listen_address": { 00:45:16.908 "trtype": "TCP", 00:45:16.908 "adrfam": "IPv4", 00:45:16.908 "traddr": "10.0.0.2", 00:45:16.908 "trsvcid": "4420" 00:45:16.908 }, 00:45:16.908 "peer_address": { 00:45:16.908 "trtype": "TCP", 00:45:16.908 "adrfam": "IPv4", 00:45:16.908 "traddr": "10.0.0.1", 00:45:16.908 "trsvcid": "53144" 00:45:16.908 }, 00:45:16.908 "auth": { 00:45:16.908 "state": "completed", 00:45:16.908 "digest": "sha512", 00:45:16.908 "dhgroup": "ffdhe6144" 00:45:16.908 } 00:45:16.908 } 00:45:16.908 ]' 00:45:16.908 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:16.909 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:16.909 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:16.909 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:16.909 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:16.909 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:16.909 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:16.909 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:17.167 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:45:17.822 03:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:17.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:17.822 03:41:59 
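The key3 passes that follow differ from the others in one respect: no controller key is registered, so authentication is unidirectional (the host proves itself to the target, but not the reverse). The script handles this with the ${ckeys[$3]:+...} expansion visible in the traces above, which silently drops --dhchap-ctrlr-key when the ckeys entry is empty. A small sketch of that mechanism, with illustrative values:

  ckeys=("ck0" "ck1" "ck2" "")   # key3 deliberately has no controller key
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${ckey[@]:-unidirectional: no --dhchap-ctrlr-key passed}"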
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:17.822 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:18.390 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:18.390 { 00:45:18.390 "cntlid": 135, 00:45:18.390 "qid": 0, 00:45:18.390 "state": "enabled", 00:45:18.390 "listen_address": { 
00:45:18.390 "trtype": "TCP", 00:45:18.390 "adrfam": "IPv4", 00:45:18.390 "traddr": "10.0.0.2", 00:45:18.390 "trsvcid": "4420" 00:45:18.390 }, 00:45:18.390 "peer_address": { 00:45:18.390 "trtype": "TCP", 00:45:18.390 "adrfam": "IPv4", 00:45:18.390 "traddr": "10.0.0.1", 00:45:18.390 "trsvcid": "53178" 00:45:18.390 }, 00:45:18.390 "auth": { 00:45:18.390 "state": "completed", 00:45:18.390 "digest": "sha512", 00:45:18.390 "dhgroup": "ffdhe6144" 00:45:18.390 } 00:45:18.390 } 00:45:18.390 ]' 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:18.390 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:18.649 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:45:18.649 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:18.649 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:18.649 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:18.649 03:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:18.649 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:19.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:19.216 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:19.475 03:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:20.042 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:20.042 { 00:45:20.042 "cntlid": 137, 00:45:20.042 "qid": 0, 00:45:20.042 "state": "enabled", 00:45:20.042 "listen_address": { 00:45:20.042 "trtype": "TCP", 00:45:20.042 "adrfam": "IPv4", 00:45:20.042 "traddr": "10.0.0.2", 00:45:20.042 "trsvcid": "4420" 00:45:20.042 }, 00:45:20.042 "peer_address": { 00:45:20.042 "trtype": "TCP", 00:45:20.042 "adrfam": "IPv4", 00:45:20.042 "traddr": "10.0.0.1", 00:45:20.042 "trsvcid": "53202" 00:45:20.042 }, 00:45:20.042 "auth": { 00:45:20.042 "state": "completed", 00:45:20.042 "digest": "sha512", 00:45:20.042 "dhgroup": "ffdhe8192" 00:45:20.042 } 00:45:20.042 } 00:45:20.042 ]' 00:45:20.042 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:20.300 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:20.300 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:20.300 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:20.300 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:20.300 03:42:01 
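After the SPDK-host path passes, each iteration replays the same credentials through the kernel initiator with nvme-cli, as in the connect lines throughout this log (-i 1 requests a single I/O queue; the secrets below are elided for readability, not real values):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 803833e2-2ada-e911-906e-0017a4403562 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0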
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:20.300 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:20.301 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:20.559 03:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:21.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:21.125 03:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:21.126 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:21.126 03:42:02 
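The DHHC-1 secrets exchanged above follow the NVMe-oF in-band authentication key format, DHHC-1:<t>:<base64 of key material plus a CRC-32>:, where <t> names the optional transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). Recent nvme-cli can mint such keys; the exact flags below are from memory and worth checking against the installed version:

  # -m selects the HMAC transform (3 = SHA-512), -l the key length in bytes
  nvme gen-dhchap-key -m 3 -l 48 -n nqn.2024-03.io.spdk:cnode0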
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:21.692 00:45:21.692 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:21.692 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:21.692 03:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:21.951 { 00:45:21.951 "cntlid": 139, 00:45:21.951 "qid": 0, 00:45:21.951 "state": "enabled", 00:45:21.951 "listen_address": { 00:45:21.951 "trtype": "TCP", 00:45:21.951 "adrfam": "IPv4", 00:45:21.951 "traddr": "10.0.0.2", 00:45:21.951 "trsvcid": "4420" 00:45:21.951 }, 00:45:21.951 "peer_address": { 00:45:21.951 "trtype": "TCP", 00:45:21.951 "adrfam": "IPv4", 00:45:21.951 "traddr": "10.0.0.1", 00:45:21.951 "trsvcid": "58518" 00:45:21.951 }, 00:45:21.951 "auth": { 00:45:21.951 "state": "completed", 00:45:21.951 "digest": "sha512", 00:45:21.951 "dhgroup": "ffdhe8192" 00:45:21.951 } 00:45:21.951 } 00:45:21.951 ]' 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:21.951 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:22.209 03:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZjNkOWM3NTA2OTE5NWIzZmNiNTExMjE2ZGE3YzEzYjRJl5Dv: --dhchap-ctrl-secret DHHC-1:02:N2YwNjEyMTgzZDQ3OTM4M2E5ZWNkZTQ5YzE4ZWM0ODg4OWVkZmIyNmEyMWRiYjFix6quQA==: 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:22.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:22.776 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:23.040 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:45:23.298 00:45:23.298 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:23.298 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:23.298 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:23.555 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:23.555 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:23.555 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:45:23.555 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:23.556 { 00:45:23.556 "cntlid": 141, 00:45:23.556 "qid": 0, 00:45:23.556 "state": "enabled", 00:45:23.556 "listen_address": { 00:45:23.556 "trtype": "TCP", 00:45:23.556 "adrfam": "IPv4", 00:45:23.556 "traddr": "10.0.0.2", 00:45:23.556 "trsvcid": "4420" 00:45:23.556 }, 00:45:23.556 "peer_address": { 00:45:23.556 "trtype": "TCP", 00:45:23.556 "adrfam": "IPv4", 00:45:23.556 "traddr": "10.0.0.1", 00:45:23.556 "trsvcid": "58556" 00:45:23.556 }, 00:45:23.556 "auth": { 00:45:23.556 "state": "completed", 00:45:23.556 "digest": "sha512", 00:45:23.556 "dhgroup": "ffdhe8192" 00:45:23.556 } 00:45:23.556 } 00:45:23.556 ]' 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:23.556 03:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:23.813 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YmI3ZjhiMDJkZDFjNDA0NjUyZjdjMDQ1MWYwZDYwZTU0MWRjZmRjNmI2ZTI2Zjc4WfqhCA==: --dhchap-ctrl-secret DHHC-1:01:ZGIwNmM4ZTI1MWY3ZDgyNzI2Y2RmYzRkNmIxOTMzYWSC1Cyl: 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:24.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:24.379 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:24.637 03:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:25.203 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:25.203 { 00:45:25.203 "cntlid": 143, 00:45:25.203 "qid": 0, 00:45:25.203 "state": "enabled", 00:45:25.203 "listen_address": { 00:45:25.203 "trtype": "TCP", 00:45:25.203 "adrfam": "IPv4", 00:45:25.203 "traddr": "10.0.0.2", 00:45:25.203 "trsvcid": "4420" 00:45:25.203 }, 00:45:25.203 "peer_address": { 00:45:25.203 "trtype": "TCP", 00:45:25.203 "adrfam": "IPv4", 00:45:25.203 "traddr": "10.0.0.1", 00:45:25.203 "trsvcid": "58568" 00:45:25.203 }, 00:45:25.203 "auth": { 00:45:25.203 "state": "completed", 00:45:25.203 "digest": "sha512", 00:45:25.203 "dhgroup": "ffdhe8192" 00:45:25.203 } 00:45:25.203 } 00:45:25.203 ]' 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:25.203 03:42:06 
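[editor's note] The key3 pass above is one-way by design: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion at @37 yields no --dhchap-ctrlr-key argument and only the host authenticates itself. In outline ($hostnqn stands in for the uuid NQN of this run):
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})   # empty ckeys[3] => ckey=() => unidirectional auth
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3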
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:25.203 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:25.461 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:25.461 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:25.461 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:25.461 03:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:45:26.028 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:26.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:26.028 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:26.028 03:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.028 03:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:26.028 03:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.028 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:45:26.029 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:45:26.029 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:45:26.029 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:26.029 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:26.029 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
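[editor's note] Each RPC-level pass is also double-checked in-band: the kernel initiator connects with the same key material passed as DHHC-1 strings on the nvme command line (@52) and is torn down again (@55). Shape of the key3 check above, with the secret elided and $hostnqn/$hostid standing in for the uuid values; no --dhchap-ctrl-secret appears because this pass is unidirectional:
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" --dhchap-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # expect: disconnected 1 controller(s)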
00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:26.287 03:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:45:26.854 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:26.854 { 00:45:26.854 "cntlid": 145, 00:45:26.854 "qid": 0, 00:45:26.854 "state": "enabled", 00:45:26.854 "listen_address": { 00:45:26.854 "trtype": "TCP", 00:45:26.854 "adrfam": "IPv4", 00:45:26.854 "traddr": "10.0.0.2", 00:45:26.854 "trsvcid": "4420" 00:45:26.854 }, 00:45:26.854 "peer_address": { 00:45:26.854 "trtype": "TCP", 00:45:26.854 "adrfam": "IPv4", 00:45:26.854 "traddr": "10.0.0.1", 00:45:26.854 "trsvcid": "58602" 00:45:26.854 }, 00:45:26.854 "auth": { 00:45:26.854 "state": "completed", 00:45:26.854 "digest": "sha512", 00:45:26.854 "dhgroup": "ffdhe8192" 00:45:26.854 } 00:45:26.854 } 00:45:26.854 ]' 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:26.854 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:27.112 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:27.112 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:27.112 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:27.112 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:27.112 03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:27.112 
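[editor's note] Before this rerun of the key0 pair, @102/@103 reopened the initiator to every supported digest and DH group; the IFS=, / printf pairs in the trace merely join the two arrays into the comma-separated lists the RPC expects:
  hostrpc bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192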
03:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:Mjk4MzIwNjM4NTc0Y2NjZjJmZDdiNjc3ZGJiN2ExZmUzMzBmNDAyZTkxZWY0OThmyUHWyA==: --dhchap-ctrl-secret DHHC-1:03:ODU5ZWUyMjkyY2M4OWI2NjQ1NDA2YWM1MWIzZDUyYmEwNjk5YTczNjI0ZWE5N2IxODRiMTRlMWFhOTBlNTIzZT+6p4U=: 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:27.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:45:27.681 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:45:28.248 request: 00:45:28.248 { 00:45:28.248 "name": "nvme0", 00:45:28.248 "trtype": "tcp", 00:45:28.248 "traddr": 
"10.0.0.2", 00:45:28.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:45:28.248 "adrfam": "ipv4", 00:45:28.248 "trsvcid": "4420", 00:45:28.248 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:28.248 "dhchap_key": "key2", 00:45:28.248 "method": "bdev_nvme_attach_controller", 00:45:28.248 "req_id": 1 00:45:28.248 } 00:45:28.248 Got JSON-RPC error response 00:45:28.248 response: 00:45:28.248 { 00:45:28.248 "code": -5, 00:45:28.248 "message": "Input/output error" 00:45:28.248 } 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:28.248 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:45:28.249 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:28.249 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:45:28.249 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:28.249 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:45:28.249 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:28.249 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:28.249 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:45:28.507 request: 00:45:28.507 { 00:45:28.507 "name": "nvme0", 00:45:28.507 "trtype": "tcp", 00:45:28.507 "traddr": "10.0.0.2", 00:45:28.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:45:28.507 "adrfam": "ipv4", 00:45:28.507 "trsvcid": "4420", 00:45:28.507 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:28.507 "dhchap_key": "key1", 00:45:28.507 "dhchap_ctrlr_key": "ckey2", 00:45:28.507 "method": "bdev_nvme_attach_controller", 00:45:28.507 "req_id": 1 00:45:28.507 } 00:45:28.507 Got JSON-RPC error response 00:45:28.507 response: 00:45:28.507 { 00:45:28.507 "code": -5, 00:45:28.507 "message": "Input/output error" 00:45:28.507 } 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:28.507 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:28.764 03:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:45:29.022 request: 00:45:29.022 { 00:45:29.022 "name": "nvme0", 00:45:29.022 "trtype": "tcp", 00:45:29.022 "traddr": "10.0.0.2", 00:45:29.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:45:29.022 "adrfam": "ipv4", 00:45:29.022 "trsvcid": "4420", 00:45:29.022 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:29.022 "dhchap_key": "key1", 00:45:29.022 "dhchap_ctrlr_key": "ckey1", 00:45:29.022 "method": "bdev_nvme_attach_controller", 00:45:29.022 "req_id": 1 00:45:29.022 } 00:45:29.022 Got JSON-RPC error response 00:45:29.022 response: 00:45:29.022 { 00:45:29.022 "code": -5, 00:45:29.022 "message": "Input/output error" 00:45:29.022 } 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2193738 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2193738 ']' 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2193738 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2193738 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2193738' 00:45:29.022 killing process with pid 2193738 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2193738 00:45:29.022 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2193738 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:45:29.280 03:42:10 
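[editor's note] The three failures just traced (@118, @125, @132) are the point of this stretch: with only key1 registered on the target, attaching with key2, with ckey2, or with a ckey1 the target was never given must fail, and each attempt does, returning JSON-RPC error -5 (Input/output error). The NOT wrapper inverts the exit status so a failed attach counts as a pass; a paraphrase of the es bookkeeping above, not the helper's full body:
  # NOT <cmd>  succeeds iff <cmd> fails
  NOT() { if "$@"; then return 1; else return 0; fi; }
  NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2   # wrong key => test passes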
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2213920 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2213920 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2213920 ']' 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:45:29.280 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2213920 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 2213920 ']' 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:29.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
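[editor's note] After killprocess takes down the first target (pid 2193738), @139 brings up a fresh one, pid 2213920, inside the cvl_0_0_ns_spdk namespace with nvmf_auth logging and RPC-gated startup, then blocks until /var/tmp/spdk.sock answers. In outline (backgrounding implied by the nvmfpid bookkeeping in the trace):
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # waits for /var/tmp/spdk.sock to accept RPCs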
00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:45:29.538 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.798 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:45:29.798 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:45:29.798 03:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:45:29.798 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:29.798 03:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:29.798 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:30.366 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:45:30.366 { 00:45:30.366 
"cntlid": 1, 00:45:30.366 "qid": 0, 00:45:30.366 "state": "enabled", 00:45:30.366 "listen_address": { 00:45:30.366 "trtype": "TCP", 00:45:30.366 "adrfam": "IPv4", 00:45:30.366 "traddr": "10.0.0.2", 00:45:30.366 "trsvcid": "4420" 00:45:30.366 }, 00:45:30.366 "peer_address": { 00:45:30.366 "trtype": "TCP", 00:45:30.366 "adrfam": "IPv4", 00:45:30.366 "traddr": "10.0.0.1", 00:45:30.366 "trsvcid": "58642" 00:45:30.366 }, 00:45:30.366 "auth": { 00:45:30.366 "state": "completed", 00:45:30.366 "digest": "sha512", 00:45:30.366 "dhgroup": "ffdhe8192" 00:45:30.366 } 00:45:30.366 } 00:45:30.366 ]' 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:45:30.366 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:45:30.625 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:45:30.625 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:45:30.625 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:45:30.625 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:30.625 03:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:30.625 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:MDQ4N2U2YjZjNDM4NDMwZTRhNjFlMGE0YWZhNDkxY2Y5MDdjMTk4NTFhZThkYjg2Y2UwZmFkY2RiZDQwMGJiZRHfI6g=: 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:45:31.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:45:31.194 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.453 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.712 request: 00:45:31.712 { 00:45:31.712 "name": "nvme0", 00:45:31.712 "trtype": "tcp", 00:45:31.712 "traddr": "10.0.0.2", 00:45:31.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:45:31.712 "adrfam": "ipv4", 00:45:31.712 "trsvcid": "4420", 00:45:31.712 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:31.712 "dhchap_key": "key3", 00:45:31.712 "method": "bdev_nvme_attach_controller", 00:45:31.712 "req_id": 1 00:45:31.712 } 00:45:31.712 Got JSON-RPC error response 00:45:31.712 response: 00:45:31.712 { 00:45:31.712 "code": -5, 00:45:31.712 "message": "Input/output error" 00:45:31.712 } 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:45:31.712 03:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:31.712 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.713 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:45:31.971 request: 00:45:31.971 { 00:45:31.971 "name": "nvme0", 00:45:31.971 "trtype": "tcp", 00:45:31.971 "traddr": "10.0.0.2", 00:45:31.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:45:31.971 "adrfam": "ipv4", 00:45:31.971 "trsvcid": "4420", 00:45:31.971 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:31.971 "dhchap_key": "key3", 00:45:31.972 "method": "bdev_nvme_attach_controller", 00:45:31.972 "req_id": 1 00:45:31.972 } 00:45:31.972 Got JSON-RPC error response 00:45:31.972 response: 00:45:31.972 { 00:45:31.972 "code": -5, 00:45:31.972 "message": "Input/output error" 00:45:31.972 } 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:31.972 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:32.231 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:45:32.490 request: 00:45:32.490 { 00:45:32.490 "name": "nvme0", 00:45:32.490 "trtype": "tcp", 00:45:32.490 "traddr": "10.0.0.2", 00:45:32.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:45:32.490 "adrfam": "ipv4", 00:45:32.490 "trsvcid": "4420", 00:45:32.490 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:45:32.490 "dhchap_key": "key0", 00:45:32.490 "dhchap_ctrlr_key": "key1", 00:45:32.490 "method": "bdev_nvme_attach_controller", 00:45:32.490 "req_id": 1 00:45:32.490 } 00:45:32.490 Got JSON-RPC error response 00:45:32.490 response: 00:45:32.490 { 00:45:32.490 "code": -5, 00:45:32.490 "message": "Input/output error" 00:45:32.490 } 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:45:32.490 03:42:13 
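[editor's note] Two more negatives sit just above (@158 and @169): after @156 re-registers key3, the initiator is first limited to sha256 digests only (@157) and then to the ffdhe2048 group (@163), and in both shapes the attach with an otherwise valid key3 fails with the same -5, presumably because no common digest or DH-group proposal can be negotiated. Roughly, with the attach arguments as in the passes above:
  hostrpc bdev_nvme_set_options --dhchap-digests sha256                    # digest mismatch
  NOT hostrpc bdev_nvme_attach_controller ... --dhchap-key key3            # must fail
  hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
  NOT hostrpc bdev_nvme_attach_controller ... --dhchap-key key3            # dhgroup mismatch, must fail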
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:45:32.490 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:45:32.490 03:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:45:32.749 03:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:32.749 03:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:45:32.749 03:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2193830 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2193830 ']' 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2193830 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2193830 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2193830' 00:45:33.007 killing process with pid 2193830 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2193830 00:45:33.007 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2193830 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
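[editor's note] The final pair of checks reads as: @187 re-adds the host entry with no DH-CHAP keys at all, @188 shows that demanding controller authentication anyway (key0 plus --dhchap-ctrlr-key key1) is still rejected with -5, while the plain --dhchap-key key0 attach at @192 completes and is detached cleanly at @196, after which the test unwinds (killprocess, rmmod, key-file removal below). As a sketch, attach arguments as above:
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"    # no dhchap keys
  NOT hostrpc bdev_nvme_attach_controller ... --dhchap-key key0 --dhchap-ctrlr-key key1
  hostrpc bdev_nvme_attach_controller ... --dhchap-key key0                # succeeds
  hostrpc bdev_nvme_detach_controller nvme0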
00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:33.267 rmmod nvme_tcp 00:45:33.267 rmmod nvme_fabrics 00:45:33.267 rmmod nvme_keyring 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2213920 ']' 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2213920 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 2213920 ']' 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 2213920 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:45:33.267 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2213920 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2213920' 00:45:33.526 killing process with pid 2213920 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 2213920 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 2213920 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:33.526 03:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:36.062 03:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:36.062 03:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.bN1 /tmp/spdk.key-sha256.JC3 /tmp/spdk.key-sha384.qMk /tmp/spdk.key-sha512.ZY0 /tmp/spdk.key-sha512.Nyw /tmp/spdk.key-sha384.Qe2 /tmp/spdk.key-sha256.3R8 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:45:36.062 00:45:36.062 real 2m7.473s 00:45:36.062 user 4m51.401s 00:45:36.062 sys 0m20.609s 00:45:36.062 03:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:36.062 03:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:45:36.062 ************************************ 00:45:36.062 END TEST 
nvmf_auth_target 00:45:36.062 ************************************ 00:45:36.062 03:42:16 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:45:36.062 03:42:16 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:45:36.062 03:42:16 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:45:36.062 03:42:16 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:36.062 03:42:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:36.062 ************************************ 00:45:36.062 START TEST nvmf_bdevio_no_huge 00:45:36.062 ************************************ 00:45:36.062 03:42:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:45:36.062 * Looking for test storage... 00:45:36.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:36.062 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:36.062 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:45:36.062 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
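nvmf/common.sh, sourced above, pins the host identity for the whole run: nvme gen-hostnqn mints a fresh hostnqn, the hostid is its trailing UUID, and the NVME_HOST array carries both as connect arguments. A sketch of how those values would feed a plain kernel-initiator connect (the parameter expansion is an assumed derivation that merely matches the traced values, and the bdevio run below actually drives I/O through SPDK's own initiator rather than this path):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-...
NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed: strip everything up to the last colon
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"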
00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:45:36.063 03:42:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:45:41.337 Found 0000:86:00.0 (0x8086 - 0x159b) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:45:41.337 Found 0000:86:00.1 (0x8086 - 0x159b) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:41.337 03:42:22 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:45:41.337 Found net devices under 0000:86:00.0: cvl_0_0 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:41.337 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:45:41.338 Found net devices under 0000:86:00.1: cvl_0_1 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:41.338 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:41.596 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:41.596 
03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:41.596 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:45:41.596 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:41.596 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:41.596 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:41.596 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:45:41.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:41.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:45:41.596 00:45:41.596 --- 10.0.0.2 ping statistics --- 00:45:41.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:41.596 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:45:41.596 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:41.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:41.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:45:41.596 00:45:41.596 --- 10.0.0.1 ping statistics --- 00:45:41.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:41.597 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@723 -- # xtrace_disable 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2218371 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2218371 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # '[' -z 2218371 ']' 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local max_retries=100 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:45:41.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # xtrace_disable 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:41.597 03:42:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:45:41.597 [2024-06-11 03:42:22.981463] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:45:41.597 [2024-06-11 03:42:22.981508] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:45:41.856 [2024-06-11 03:42:23.047077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:41.856 [2024-06-11 03:42:23.112729] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:41.856 [2024-06-11 03:42:23.112763] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:41.856 [2024-06-11 03:42:23.112772] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:41.856 [2024-06-11 03:42:23.112779] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:41.856 [2024-06-11 03:42:23.112786] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:41.856 [2024-06-11 03:42:23.112895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:45:41.856 [2024-06-11 03:42:23.112923] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:45:41.856 [2024-06-11 03:42:23.113042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:45:41.856 [2024-06-11 03:42:23.113043] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@863 -- # return 0 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@729 -- # xtrace_disable 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.422 [2024-06-11 03:42:23.813184] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:45:42.422 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:42.422 03:42:23 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.681 Malloc0 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:42.681 [2024-06-11 03:42:23.849401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:45:42.681 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:45:42.682 { 00:45:42.682 "params": { 00:45:42.682 "name": "Nvme$subsystem", 00:45:42.682 "trtype": "$TEST_TRANSPORT", 00:45:42.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:42.682 "adrfam": "ipv4", 00:45:42.682 "trsvcid": "$NVMF_PORT", 00:45:42.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:42.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:42.682 "hdgst": ${hdgst:-false}, 00:45:42.682 "ddgst": ${ddgst:-false} 00:45:42.682 }, 00:45:42.682 "method": "bdev_nvme_attach_controller" 00:45:42.682 } 00:45:42.682 EOF 00:45:42.682 )") 00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
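At this point the target side is fully provisioned through rpc_cmd: a TCP transport, a RAM-backed bdev, a subsystem carrying that bdev as a namespace, and a listener on the test address. The same five calls as one plain script (a sketch only; rpc.py on PATH talking to the target's default RPC socket is assumed, arguments exactly as traced):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420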
00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=,
00:45:42.682 03:42:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:45:42.682 "params": {
00:45:42.682 "name": "Nvme1",
00:45:42.682 "trtype": "tcp",
00:45:42.682 "traddr": "10.0.0.2",
00:45:42.682 "adrfam": "ipv4",
00:45:42.682 "trsvcid": "4420",
00:45:42.682 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:45:42.682 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:45:42.682 "hdgst": false,
00:45:42.682 "ddgst": false
00:45:42.682 },
00:45:42.682 "method": "bdev_nvme_attach_controller"
00:45:42.682 }'
00:45:42.682 [2024-06-11 03:42:23.898166] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:45:42.682 [2024-06-11 03:42:23.898213] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2218619 ]
00:45:42.682 [2024-06-11 03:42:23.960323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:45:42.682 [2024-06-11 03:42:24.025491] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:45:42.682 [2024-06-11 03:42:24.025588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:45:42.682 [2024-06-11 03:42:24.025588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2
00:45:42.941 I/O targets:
00:45:42.941 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:45:42.941
00:45:42.941
00:45:42.941 CUnit - A unit testing framework for C - Version 2.1-3
00:45:42.941 http://cunit.sourceforge.net/
00:45:42.941
00:45:42.941
00:45:42.941 Suite: bdevio tests on: Nvme1n1
00:45:42.941 Test: blockdev write read block ...passed
00:45:42.941 Test: blockdev write zeroes read block ...passed
00:45:42.941 Test: blockdev write zeroes read no split ...passed
00:45:43.200 Test: blockdev write zeroes read split ...passed
00:45:43.200 Test: blockdev write zeroes read split partial ...passed
00:45:43.200 Test: blockdev reset ...[2024-06-11 03:42:24.414423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:45:43.200 [2024-06-11 03:42:24.414484] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16ec520 (9): Bad file descriptor
00:45:43.200 [2024-06-11 03:42:24.425562] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:45:43.200 passed
00:45:43.200 Test: blockdev write read 8 blocks ...passed
00:45:43.200 Test: blockdev write read size > 128k ...passed
00:45:43.200 Test: blockdev write read invalid size ...passed
00:45:43.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:45:43.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:45:43.200 Test: blockdev write read max offset ...passed
00:45:43.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:45:43.459 Test: blockdev writev readv 8 blocks ...passed
00:45:43.459 Test: blockdev writev readv 30 x 1block ...passed
00:45:43.459 Test: blockdev writev readv block ...passed
00:45:43.459 Test: blockdev writev readv size > 128k ...passed
00:45:43.459 Test: blockdev writev readv size > 128k in two iovs ...passed
00:45:43.459 Test: blockdev comparev and writev ...[2024-06-11 03:42:24.679293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.679322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.679335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.679343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.679608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.679618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.679629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.679637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.679886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.679896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.679912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.679919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.680177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.680189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.680200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:45:43.459 [2024-06-11 03:42:24.680207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:45:43.459 passed
00:45:43.459 Test: blockdev nvme passthru rw ...passed
00:45:43.459 Test: blockdev nvme passthru vendor specific ...[2024-06-11 03:42:24.762474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:45:43.459 [2024-06-11 03:42:24.762490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.762626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:45:43.459 [2024-06-11 03:42:24.762636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.762771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:45:43.459 [2024-06-11 03:42:24.762781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:45:43.459 [2024-06-11 03:42:24.762910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:45:43.459 [2024-06-11 03:42:24.762920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:45:43.459 passed
00:45:43.459 Test: blockdev nvme admin passthru ...passed
00:45:43.459 Test: blockdev copy ...passed
00:45:43.459
00:45:43.459 Run Summary: Type Total Ran Passed Failed Inactive
00:45:43.459 suites 1 1 n/a 0 0
00:45:43.459 tests 23 23 23 0 0
00:45:43.459 asserts 152 152 152 0 n/a
00:45:43.459
00:45:43.459 Elapsed time = 1.224 seconds
00:45:43.718 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:45:43.718 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:45:43.719 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:45:43.719 rmmod nvme_tcp
00:45:43.719 rmmod nvme_fabrics
00:45:43.719 rmmod nvme_keyring
00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e
00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0
00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2218371 ']'
00:45:43.978 03:42:25
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2218371 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@949 -- # '[' -z 2218371 ']' 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # kill -0 2218371 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # uname 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2218371 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2218371' 00:45:43.978 killing process with pid 2218371 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # kill 2218371 00:45:43.978 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # wait 2218371 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:44.237 03:42:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:46.142 03:42:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:46.142 00:45:46.142 real 0m10.542s 00:45:46.142 user 0m12.859s 00:45:46.142 sys 0m5.315s 00:45:46.142 03:42:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # xtrace_disable 00:45:46.142 03:42:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:45:46.142 ************************************ 00:45:46.142 END TEST nvmf_bdevio_no_huge 00:45:46.142 ************************************ 00:45:46.401 03:42:27 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:45:46.401 03:42:27 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:45:46.401 03:42:27 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:45:46.401 03:42:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:46.401 ************************************ 00:45:46.401 START TEST nvmf_tls 00:45:46.401 ************************************ 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:45:46.401 * Looking for test storage... 
00:45:46.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:45:46.401 03:42:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:45:52.972 
03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:45:52.972 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:45:52.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:45:52.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:45:52.973 Found net devices under 0000:86:00.0: cvl_0_0 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:45:52.973 Found net devices under 0000:86:00.1: cvl_0_1 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:45:52.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:52.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:45:52.973 00:45:52.973 --- 10.0.0.2 ping statistics --- 00:45:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.973 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:52.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:52.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:45:52.973 00:45:52.973 --- 10.0.0.1 ping statistics --- 00:45:52.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:52.973 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2222656 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2222656 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2222656 ']' 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:52.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:45:52.973 03:42:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 [2024-06-11 03:42:33.908449] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:45:52.973 [2024-06-11 03:42:33.908489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:52.973 EAL: No free 2048 kB hugepages reported on node 1 00:45:52.973 [2024-06-11 03:42:33.971635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:52.973 [2024-06-11 03:42:34.011481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:52.973 [2024-06-11 03:42:34.011514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:45:52.973 [2024-06-11 03:42:34.011525] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:52.973 [2024-06-11 03:42:34.011530] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:52.973 [2024-06-11 03:42:34.011535] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:52.973 [2024-06-11 03:42:34.011569] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:45:52.973 true 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:52.973 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:45:53.233 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:45:53.233 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:45:53.233 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:45:53.233 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:53.233 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:45:53.492 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:45:53.492 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:45:53.492 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:45:53.751 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:45:53.751 03:42:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:53.751 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:45:53.751 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:45:53.751 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:45:53.751 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:54.012 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:45:54.012 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:45:54.012 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:45:54.302 03:42:35 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:54.302 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:45:54.302 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:45:54.302 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:45:54.302 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:45:54.595 03:42:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.tHQilQxrDN 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.HuDVDFgfy5 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.tHQilQxrDN 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.HuDVDFgfy5 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:45:54.854 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:45:55.113 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.tHQilQxrDN 00:45:55.113 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tHQilQxrDN 00:45:55.113 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:45:55.372 [2024-06-11 03:42:36.597735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:55.373 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:45:55.373 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:45:55.632 [2024-06-11 03:42:36.918549] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:55.632 [2024-06-11 03:42:36.918733] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:55.632 03:42:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:45:55.890 malloc0 00:45:55.890 03:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:45:55.890 03:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tHQilQxrDN 00:45:56.149 [2024-06-11 03:42:37.395694] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:45:56.149 03:42:37 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tHQilQxrDN 00:45:56.149 EAL: No free 2048 kB hugepages reported on node 1 00:46:06.129 Initializing NVMe Controllers 00:46:06.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:46:06.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:46:06.129 Initialization complete. Launching workers. 
00:46:06.129 ======================================================== 00:46:06.129 Latency(us) 00:46:06.129 Device Information : IOPS MiB/s Average min max 00:46:06.129 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17194.19 67.16 3722.56 775.58 7043.11 00:46:06.129 ======================================================== 00:46:06.129 Total : 17194.19 67.16 3722.56 775.58 7043.11 00:46:06.129 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tHQilQxrDN 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tHQilQxrDN' 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2224982 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2224982 /var/tmp/bdevperf.sock 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2224982 ']' 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:06.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:06.129 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:06.389 [2024-06-11 03:42:47.545634] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:06.389 [2024-06-11 03:42:47.545685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224982 ] 00:46:06.389 EAL: No free 2048 kB hugepages reported on node 1 00:46:06.389 [2024-06-11 03:42:47.602912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:06.389 [2024-06-11 03:42:47.643988] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:06.389 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:06.389 03:42:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:06.389 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tHQilQxrDN 00:46:06.648 [2024-06-11 03:42:47.860237] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:06.648 [2024-06-11 03:42:47.860323] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:46:06.648 TLSTESTn1 00:46:06.648 03:42:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:46:06.648 Running I/O for 10 seconds... 00:46:18.853 00:46:18.853 Latency(us) 00:46:18.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:18.854 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:18.854 Verification LBA range: start 0x0 length 0x2000 00:46:18.854 TLSTESTn1 : 10.01 5730.44 22.38 0.00 0.00 22301.81 6491.18 39446.43 00:46:18.854 =================================================================================================================== 00:46:18.854 Total : 5730.44 22.38 0.00 0.00 22301.81 6491.18 39446.43 00:46:18.854 0 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2224982 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2224982 ']' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2224982 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2224982 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2224982' 00:46:18.854 killing process with pid 2224982 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2224982 00:46:18.854 Received shutdown signal, test time was about 10.000000 seconds 00:46:18.854 00:46:18.854 Latency(us) 00:46:18.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:46:18.854 =================================================================================================================== 00:46:18.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:18.854 [2024-06-11 03:42:58.124661] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2224982 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HuDVDFgfy5 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HuDVDFgfy5 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.HuDVDFgfy5 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.HuDVDFgfy5' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2226612 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2226612 /var/tmp/bdevperf.sock 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2226612 ']' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:18.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:18.854 [2024-06-11 03:42:58.346756] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:18.854 [2024-06-11 03:42:58.346808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226612 ] 00:46:18.854 EAL: No free 2048 kB hugepages reported on node 1 00:46:18.854 [2024-06-11 03:42:58.401225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:18.854 [2024-06-11 03:42:58.437408] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.HuDVDFgfy5 00:46:18.854 [2024-06-11 03:42:58.676145] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:18.854 [2024-06-11 03:42:58.676225] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:46:18.854 [2024-06-11 03:42:58.685642] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:18.854 [2024-06-11 03:42:58.686497] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8e210 (107): Transport endpoint is not connected 00:46:18.854 [2024-06-11 03:42:58.687490] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8e210 (9): Bad file descriptor 00:46:18.854 [2024-06-11 03:42:58.688491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:18.854 [2024-06-11 03:42:58.688502] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:46:18.854 [2024-06-11 03:42:58.688512] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
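The attach above was made with the second key (/tmp/tmp.HuDVDFgfy5) against a target provisioned only with the first, so the TLS handshake collapses and the controller lands in failed state; the JSON-RPC error bdevperf surfaces follows. The suite inverts that outcome with the NOT helper from autotest_common.sh. A minimal sketch, assuming a simplified version of that helper (the real one also records $es and special-cases signal exits above 128):

```bash
# Minimal stand-in for autotest_common.sh's NOT(): invert the wrapped
# command's exit status so the test passes only when the attach fails.
NOT() {
    if "$@"; then
        return 1    # unexpectedly succeeded
    fi
    return 0        # failed, as the negative test expects
}

# Usage mirroring the attach traced above (paths and NQNs from the log):
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.HuDVDFgfy5
```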
00:46:18.854 request: 00:46:18.854 { 00:46:18.854 "name": "TLSTEST", 00:46:18.854 "trtype": "tcp", 00:46:18.854 "traddr": "10.0.0.2", 00:46:18.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:18.854 "adrfam": "ipv4", 00:46:18.854 "trsvcid": "4420", 00:46:18.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:18.854 "psk": "/tmp/tmp.HuDVDFgfy5", 00:46:18.854 "method": "bdev_nvme_attach_controller", 00:46:18.854 "req_id": 1 00:46:18.854 } 00:46:18.854 Got JSON-RPC error response 00:46:18.854 response: 00:46:18.854 { 00:46:18.854 "code": -5, 00:46:18.854 "message": "Input/output error" 00:46:18.854 } 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2226612 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2226612 ']' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2226612 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2226612 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2226612' 00:46:18.854 killing process with pid 2226612 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2226612 00:46:18.854 Received shutdown signal, test time was about 10.000000 seconds 00:46:18.854 00:46:18.854 Latency(us) 00:46:18.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:18.854 =================================================================================================================== 00:46:18.854 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:18.854 [2024-06-11 03:42:58.755923] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2226612 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tHQilQxrDN 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tHQilQxrDN 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tHQilQxrDN 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tHQilQxrDN' 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2226801 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:18.854 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:18.855 03:42:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2226801 /var/tmp/bdevperf.sock 00:46:18.855 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2226801 ']' 00:46:18.855 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:18.855 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:18.855 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:18.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:18.855 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:18.855 03:42:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:18.855 [2024-06-11 03:42:58.965311] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:18.855 [2024-06-11 03:42:58.965360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226801 ] 00:46:18.855 EAL: No free 2048 kB hugepages reported on node 1 00:46:18.855 [2024-06-11 03:42:59.019537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:18.855 [2024-06-11 03:42:59.057165] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.tHQilQxrDN 00:46:18.855 [2024-06-11 03:42:59.287875] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:18.855 [2024-06-11 03:42:59.287957] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:46:18.855 [2024-06-11 03:42:59.297115] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:46:18.855 [2024-06-11 03:42:59.297135] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:46:18.855 [2024-06-11 03:42:59.297157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:18.855 [2024-06-11 03:42:59.297251] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x770210 (107): Transport endpoint is not connected 00:46:18.855 [2024-06-11 03:42:59.298214] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x770210 (9): Bad file descriptor 00:46:18.855 [2024-06-11 03:42:59.299214] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:18.855 [2024-06-11 03:42:59.299223] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:46:18.855 [2024-06-11 03:42:59.299232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
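This run reused the correct key but presented hostnqn host2, which was never registered, so the target-side lookup above ("Could not find PSK for identity: NVMe0R01 ...") comes up empty before any application data flows. A small sketch of that identity string as it appears in the error; treat the field layout as an observation from this log rather than a spec quotation, with "01" being the HMAC id carried by the interchange key:

```bash
# Reconstruct the PSK identity the target searches for, as printed in
# the error above: "NVMe0R01 <hostnqn> <subnqn>".
psk_identity() {
    local hostnqn=$1 subnqn=$2 hmac=${3:-01}
    printf 'NVMe0R%s %s %s\n' "$hmac" "$hostnqn" "$subnqn"
}

psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
```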
00:46:18.855 request: 00:46:18.855 { 00:46:18.855 "name": "TLSTEST", 00:46:18.855 "trtype": "tcp", 00:46:18.855 "traddr": "10.0.0.2", 00:46:18.855 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:46:18.855 "adrfam": "ipv4", 00:46:18.855 "trsvcid": "4420", 00:46:18.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:18.855 "psk": "/tmp/tmp.tHQilQxrDN", 00:46:18.855 "method": "bdev_nvme_attach_controller", 00:46:18.855 "req_id": 1 00:46:18.855 } 00:46:18.855 Got JSON-RPC error response 00:46:18.855 response: 00:46:18.855 { 00:46:18.855 "code": -5, 00:46:18.855 "message": "Input/output error" 00:46:18.855 } 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2226801 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2226801 ']' 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2226801 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2226801 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2226801' 00:46:18.855 killing process with pid 2226801 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2226801 00:46:18.855 Received shutdown signal, test time was about 10.000000 seconds 00:46:18.855 00:46:18.855 Latency(us) 00:46:18.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:18.855 =================================================================================================================== 00:46:18.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:18.855 [2024-06-11 03:42:59.362809] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2226801 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tHQilQxrDN 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tHQilQxrDN 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # 
case "$(type -t "$arg")" in 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tHQilQxrDN 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tHQilQxrDN' 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2226851 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2226851 /var/tmp/bdevperf.sock 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2226851 ']' 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:18.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:18.855 [2024-06-11 03:42:59.571607] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:18.855 [2024-06-11 03:42:59.571656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226851 ] 00:46:18.855 EAL: No free 2048 kB hugepages reported on node 1 00:46:18.855 [2024-06-11 03:42:59.626588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:18.855 [2024-06-11 03:42:59.664725] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:18.855 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tHQilQxrDN 00:46:18.855 [2024-06-11 03:42:59.896064] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:18.855 [2024-06-11 03:42:59.896140] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:46:18.855 [2024-06-11 03:42:59.900750] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:46:18.855 [2024-06-11 03:42:59.900769] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:46:18.855 [2024-06-11 03:42:59.900807] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:18.855 [2024-06-11 03:42:59.901452] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2428210 (107): Transport endpoint is not connected 00:46:18.855 [2024-06-11 03:42:59.902443] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2428210 (9): Bad file descriptor 00:46:18.855 [2024-06-11 03:42:59.903445] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:46:18.855 [2024-06-11 03:42:59.903454] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:46:18.855 [2024-06-11 03:42:59.903464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
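Same failure shape again, this time from a bogus subsystem NQN: the PSK is registered against the (subsystem, host) pair, so either half being wrong misses the lookup. A hedged sketch of that registration model using the rpc.py verbs traced earlier; /tmp/host2.key and the host2 registration are hypothetical additions for illustration:

```bash
# Keys resolve per (subsystem, host) pair; each identity below matches
# only its exact NQN combination.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tHQilQxrDN
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk /tmp/host2.key    # hypothetical extra host
# Only (cnode1, host1) and (cnode1, host2) identities now resolve;
# nqn.2016-06.io.spdk:cnode2 was never created, so its lookup fails.
```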
00:46:18.855 request: 00:46:18.855 { 00:46:18.855 "name": "TLSTEST", 00:46:18.855 "trtype": "tcp", 00:46:18.855 "traddr": "10.0.0.2", 00:46:18.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:18.855 "adrfam": "ipv4", 00:46:18.855 "trsvcid": "4420", 00:46:18.855 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:46:18.855 "psk": "/tmp/tmp.tHQilQxrDN", 00:46:18.855 "method": "bdev_nvme_attach_controller", 00:46:18.855 "req_id": 1 00:46:18.855 } 00:46:18.855 Got JSON-RPC error response 00:46:18.855 response: 00:46:18.855 { 00:46:18.856 "code": -5, 00:46:18.856 "message": "Input/output error" 00:46:18.856 } 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2226851 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2226851 ']' 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2226851 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2226851 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2226851' 00:46:18.856 killing process with pid 2226851 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2226851 00:46:18.856 Received shutdown signal, test time was about 10.000000 seconds 00:46:18.856 00:46:18.856 Latency(us) 00:46:18.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:18.856 =================================================================================================================== 00:46:18.856 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:18.856 [2024-06-11 03:42:59.967696] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:46:18.856 03:42:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2226851 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
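The last negative case, traced below, drops the PSK entirely: the initiator then attempts a cleartext NVMe/TCP connect against the listener that was created with -k, there is no plaintext fallback, and the read fails with the same errno 107. A sketch of that check, with paths and NQNs copied from the log:

```bash
# Without --psk the attach goes out in the clear; the TLS-only listener
# drops it, so success here would be a test failure.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

if "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
    echo "unexpected: cleartext attach to a TLS listener succeeded" >&2
    exit 1
fi
```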
00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2226938 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2226938 /var/tmp/bdevperf.sock 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2226938 ']' 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:18.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:18.856 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:18.856 [2024-06-11 03:43:00.186255] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:18.856 [2024-06-11 03:43:00.186305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226938 ] 00:46:18.856 EAL: No free 2048 kB hugepages reported on node 1 00:46:18.856 [2024-06-11 03:43:00.242586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:19.115 [2024-06-11 03:43:00.280479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:19.115 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:19.115 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:19.115 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:46:19.115 [2024-06-11 03:43:00.510281] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:19.115 [2024-06-11 03:43:00.511877] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2324810 (9): Bad file descriptor 00:46:19.115 [2024-06-11 03:43:00.512875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:46:19.115 [2024-06-11 03:43:00.512884] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:46:19.115 [2024-06-11 03:43:00.512892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
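Note that the request object printed next carries no "psk" member, confirming this attach really was attempted in the clear. After it the suite reaps bdevperf and then the first nvmf_tgt; a simplified sketch of that teardown pattern (the real killprocess, visible in the trace, also probes the pid with kill -0 and ps before killing):

```bash
# Simplified reaping pattern: kill, then wait, under an EXIT trap so a
# failed assertion still tears everything down.
killprocess() {
    local pid=$1
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap if the pid is our child
}
cleanup() { killprocess "$bdevperf_pid" 2>/dev/null || :; }  # sketch only
trap 'cleanup; exit 1' SIGINT SIGTERM EXIT   # as installed before each run
```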
00:46:19.115 request: 00:46:19.115 { 00:46:19.115 "name": "TLSTEST", 00:46:19.115 "trtype": "tcp", 00:46:19.115 "traddr": "10.0.0.2", 00:46:19.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:19.115 "adrfam": "ipv4", 00:46:19.115 "trsvcid": "4420", 00:46:19.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:19.115 "method": "bdev_nvme_attach_controller", 00:46:19.115 "req_id": 1 00:46:19.115 } 00:46:19.115 Got JSON-RPC error response 00:46:19.115 response: 00:46:19.115 { 00:46:19.115 "code": -5, 00:46:19.115 "message": "Input/output error" 00:46:19.115 } 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2226938 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2226938 ']' 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2226938 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2226938 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2226938' 00:46:19.374 killing process with pid 2226938 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2226938 00:46:19.374 Received shutdown signal, test time was about 10.000000 seconds 00:46:19.374 00:46:19.374 Latency(us) 00:46:19.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:19.374 =================================================================================================================== 00:46:19.374 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2226938 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2222656 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2222656 ']' 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2222656 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:19.374 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2222656 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2222656' 00:46:19.634 killing process with pid 2222656 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2222656 
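While the first target drains below, the suite derives a longer key for the next phase with format_interchange_psk ... 2. A sketch of what that helper's `python -` stage computes: the TLS PSK interchange format "NVMeTLSkey-1:<hmac>:<base64(secret + crc32)>:". Assumptions: the secret is taken as the literal ASCII string shown in the trace, the CRC-32 is appended little-endian, and hmac ids 01/02 select HMAC-SHA-256/384.

```bash
# Sketch of the interchange-key derivation; the trailing base64 bytes
# in each key above are the appended CRC-32 of the secret.
format_interchange_psk() {
    local secret=$1 hmac=$2
    python3 - "$secret" "$hmac" <<'EOF'
import base64, struct, sys, zlib
secret, hmac = sys.argv[1].encode(), int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(secret) & 0xffffffff)
print(f"NVMeTLSkey-1:{hmac:02}:{base64.b64encode(secret + crc).decode()}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# expected, per the trace below:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
```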
00:46:19.634 [2024-06-11 03:43:00.781133] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2222656 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:46:19.634 03:43:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.0KRsigaXmY 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.0KRsigaXmY 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2227108 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2227108 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2227108 ']' 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:19.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:19.634 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:19.893 [2024-06-11 03:43:01.063350] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:19.893 [2024-06-11 03:43:01.063398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:19.893 EAL: No free 2048 kB hugepages reported on node 1 00:46:19.893 [2024-06-11 03:43:01.126476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:19.893 [2024-06-11 03:43:01.165770] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:19.893 [2024-06-11 03:43:01.165810] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:19.893 [2024-06-11 03:43:01.165817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:19.893 [2024-06-11 03:43:01.165823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:19.893 [2024-06-11 03:43:01.165828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:19.893 [2024-06-11 03:43:01.165847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:20.463 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:20.463 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:20.463 03:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:20.463 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:20.463 03:43:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:20.723 03:43:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:20.723 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.0KRsigaXmY 00:46:20.723 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0KRsigaXmY 00:46:20.723 03:43:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:20.723 [2024-06-11 03:43:02.043864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:20.723 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:20.980 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:20.980 [2024-06-11 03:43:02.372688] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:20.980 [2024-06-11 03:43:02.372864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:21.238 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:21.238 malloc0 00:46:21.238 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY 
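That completes the second provisioning pass: transport, subsystem, TLS listener, malloc-backed namespace, and the host-to-key binding whose deprecation warning follows. Condensed into one sketch, with every verb and flag lifted from the trace above and $rpc abbreviating the full rpc.py path:

```bash
# Condensed replay of the bring-up just traced; the target is already
# running inside the cvl_0_0_ns_spdk namespace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o                     # TCP transport
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10                            # subsystem, 10 ns max
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                          # -k: TLS listener
"$rpc" bdev_malloc_create 32 4096 -b malloc0               # RAM-backed bdev
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY    # bind key to host1
```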
00:46:21.496 [2024-06-11 03:43:02.877825] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KRsigaXmY 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0KRsigaXmY' 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2227399 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2227399 /var/tmp/bdevperf.sock 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2227399 ']' 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:21.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:21.496 03:43:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:21.754 [2024-06-11 03:43:02.940713] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
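Everything the target needs for the TLS case is created by the setup_nvmf_tgt calls traced above, and the initiator half follows immediately below: bdevperf is started with -z so it idles until its own RPC socket receives a controller, then bdevperf.py perform_tests drives the verify workload. Condensed from the trace, with the long rpc.py and bdevperf paths shortened:

    # Target side (rpc.py against the default /var/tmp/spdk.sock):
    rpc.py nvmf_create_transport -t tcp -o          # -o turns off the C2H success optimization
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k               # -k marks the listener as a secure (TLS) channel
    rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB RAM bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY   # per-host PSK file (the deprecated path form)

    # Initiator side (bdevperf exposes its own RPC socket):
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

With matching 0600-mode keys on both ends the handshake succeeds, and the 10-second verify run below settles around 3.7k IOPS (14.48 MiB/s at 4 KiB against the malloc bdev).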
00:46:21.754 [2024-06-11 03:43:02.940763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227399 ] 00:46:21.754 EAL: No free 2048 kB hugepages reported on node 1 00:46:21.754 [2024-06-11 03:43:02.997107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:21.754 [2024-06-11 03:43:03.036189] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:21.754 03:43:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:21.754 03:43:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:21.754 03:43:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY 00:46:22.012 [2024-06-11 03:43:03.275006] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:22.012 [2024-06-11 03:43:03.275084] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:46:22.012 TLSTESTn1 00:46:22.012 03:43:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:46:22.271 Running I/O for 10 seconds... 00:46:32.287 00:46:32.287 Latency(us) 00:46:32.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:32.287 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:32.287 Verification LBA range: start 0x0 length 0x2000 00:46:32.287 TLSTESTn1 : 10.02 3706.89 14.48 0.00 0.00 34475.18 6054.28 66909.14 00:46:32.287 =================================================================================================================== 00:46:32.287 Total : 3706.89 14.48 0.00 0.00 34475.18 6054.28 66909.14 00:46:32.287 0 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2227399 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2227399 ']' 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2227399 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2227399 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2227399' 00:46:32.287 killing process with pid 2227399 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2227399 00:46:32.287 Received shutdown signal, test time was about 10.000000 seconds 00:46:32.287 00:46:32.287 Latency(us) 00:46:32.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:46:32.287 =================================================================================================================== 00:46:32.287 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:32.287 [2024-06-11 03:43:13.553367] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:46:32.287 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2227399 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.0KRsigaXmY 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KRsigaXmY 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KRsigaXmY 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.0KRsigaXmY 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.0KRsigaXmY' 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2229215 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2229215 /var/tmp/bdevperf.sock 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2229215 ']' 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:32.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:32.547 [2024-06-11 03:43:13.778602] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:32.547 [2024-06-11 03:43:13.778651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229215 ] 00:46:32.547 EAL: No free 2048 kB hugepages reported on node 1 00:46:32.547 [2024-06-11 03:43:13.831883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:32.547 [2024-06-11 03:43:13.867994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:32.547 03:43:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY 00:46:32.806 [2024-06-11 03:43:14.099381] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:32.806 [2024-06-11 03:43:14.099436] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:46:32.806 [2024-06-11 03:43:14.099443] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.0KRsigaXmY 00:46:32.806 request: 00:46:32.806 { 00:46:32.806 "name": "TLSTEST", 00:46:32.806 "trtype": "tcp", 00:46:32.806 "traddr": "10.0.0.2", 00:46:32.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:32.806 "adrfam": "ipv4", 00:46:32.806 "trsvcid": "4420", 00:46:32.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:32.806 "psk": "/tmp/tmp.0KRsigaXmY", 00:46:32.806 "method": "bdev_nvme_attach_controller", 00:46:32.806 "req_id": 1 00:46:32.806 } 00:46:32.806 Got JSON-RPC error response 00:46:32.806 response: 00:46:32.806 { 00:46:32.806 "code": -1, 00:46:32.806 "message": "Operation not permitted" 00:46:32.806 } 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2229215 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2229215 ']' 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2229215 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2229215 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2229215' 00:46:32.806 killing process with pid 2229215 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2229215 00:46:32.806 Received shutdown signal, test time was about 10.000000 seconds 00:46:32.806 00:46:32.806 Latency(us) 00:46:32.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:32.806 =================================================================================================================== 00:46:32.806 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:32.806 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 
-- # wait 2229215 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2227108 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2227108 ']' 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2227108 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2227108 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2227108' 00:46:33.065 killing process with pid 2227108 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2227108 00:46:33.065 [2024-06-11 03:43:14.373637] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:46:33.065 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2227108 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2229450 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2229450 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2229450 ']' 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:33.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:33.325 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.325 [2024-06-11 03:43:14.612823] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
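The es=1 bookkeeping expanded at the top of this block comes from the NOT wrapper in autotest_common.sh: it captures the wrapped command's exit status and succeeds only when the command failed, which is how tls.sh@171 asserts that attaching with a world-readable key must not work. A simplified reconstruction of its shape from the trace, not SPDK's exact helper (the real one also special-cases signal exits, the (( es > 128 )) branch seen above):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, keep its status
        (( es != 0 ))    # report success only if the command failed
    }

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        /tmp/tmp.0KRsigaXmY    # passes here, because the attach returned 1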
00:46:33.325 [2024-06-11 03:43:14.612875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:33.325 EAL: No free 2048 kB hugepages reported on node 1 00:46:33.325 [2024-06-11 03:43:14.675280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:33.325 [2024-06-11 03:43:14.711990] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:33.325 [2024-06-11 03:43:14.712035] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:33.325 [2024-06-11 03:43:14.712042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:33.325 [2024-06-11 03:43:14.712048] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:33.325 [2024-06-11 03:43:14.712053] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:33.325 [2024-06-11 03:43:14.712092] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.0KRsigaXmY 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.0KRsigaXmY 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.0KRsigaXmY 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0KRsigaXmY 00:46:33.584 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:33.584 [2024-06-11 03:43:14.983235] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:33.844 03:43:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:33.844 03:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:34.103 [2024-06-11 03:43:15.320088] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:46:34.103 [2024-06-11 03:43:15.320264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:34.103 03:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:34.103 malloc0 00:46:34.103 03:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:34.362 03:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY 00:46:34.622 [2024-06-11 03:43:15.829446] tcp.c:3581:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:46:34.622 [2024-06-11 03:43:15.829475] tcp.c:3667:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:46:34.622 [2024-06-11 03:43:15.829497] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:46:34.622 request: 00:46:34.622 { 00:46:34.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:34.622 "host": "nqn.2016-06.io.spdk:host1", 00:46:34.622 "psk": "/tmp/tmp.0KRsigaXmY", 00:46:34.622 "method": "nvmf_subsystem_add_host", 00:46:34.622 "req_id": 1 00:46:34.622 } 00:46:34.622 Got JSON-RPC error response 00:46:34.622 response: 00:46:34.622 { 00:46:34.622 "code": -32603, 00:46:34.622 "message": "Internal error" 00:46:34.622 } 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2229450 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2229450 ']' 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2229450 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2229450 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2229450' 00:46:34.622 killing process with pid 2229450 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2229450 00:46:34.622 03:43:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2229450 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.0KRsigaXmY 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=2229709 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2229709 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2229709 ']' 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:34.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:34.881 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:34.881 [2024-06-11 03:43:16.133628] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:46:34.881 [2024-06-11 03:43:16.133676] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:34.881 EAL: No free 2048 kB hugepages reported on node 1 00:46:34.881 [2024-06-11 03:43:16.198218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:34.881 [2024-06-11 03:43:16.237240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:34.881 [2024-06-11 03:43:16.237276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:34.881 [2024-06-11 03:43:16.237283] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:34.881 [2024-06-11 03:43:16.237293] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:34.881 [2024-06-11 03:43:16.237298] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
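Note how the same permission check surfaced twice above with different error shapes: on the initiator, bdev_nvme_load_psk rejected the 0666 key and bdev_nvme_attach_controller returned JSON-RPC error -1 ("Operation not permitted"); on the target, tcp_load_psk failed inside nvmf_subsystem_add_host and surfaced as -32603 ("Internal error"). The check itself amounts to refusing a key file that group or others can read; a rough shell equivalent, with the exact mode set SPDK accepts being an assumption:

    key=/tmp/tmp.0KRsigaXmY
    mode=$(stat -c '%a' "$key")
    if [ "$mode" != "600" ] && [ "$mode" != "400" ]; then
        echo "Incorrect permissions for PSK file" >&2   # mirrors the errors logged above
        exit 1
    fi

tls.sh@181 restores chmod 0600 on the key, so the fresh target started here can load it again for the save_config round-trip that follows.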
00:46:34.882 [2024-06-11 03:43:16.237315] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.0KRsigaXmY 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0KRsigaXmY 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:35.141 [2024-06-11 03:43:16.505676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:35.141 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:35.400 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:35.659 [2024-06-11 03:43:16.826489] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:35.659 [2024-06-11 03:43:16.826679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:35.659 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:35.659 malloc0 00:46:35.659 03:43:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:35.918 03:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY 00:46:36.178 [2024-06-11 03:43:17.335879] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2229930 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2229930 /var/tmp/bdevperf.sock 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2229930 ']' 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:36.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:36.178 [2024-06-11 03:43:17.396249] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:46:36.178 [2024-06-11 03:43:17.396296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229930 ] 00:46:36.178 EAL: No free 2048 kB hugepages reported on node 1 00:46:36.178 [2024-06-11 03:43:17.450370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:36.178 [2024-06-11 03:43:17.489291] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:36.178 03:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY 00:46:36.437 [2024-06-11 03:43:17.716704] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:36.437 [2024-06-11 03:43:17.716780] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:46:36.437 TLSTESTn1 00:46:36.437 03:43:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:46:36.696 03:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:46:36.696 "subsystems": [ 00:46:36.696 { 00:46:36.696 "subsystem": "keyring", 00:46:36.696 "config": [] 00:46:36.696 }, 00:46:36.696 { 00:46:36.696 "subsystem": "iobuf", 00:46:36.696 "config": [ 00:46:36.696 { 00:46:36.696 "method": "iobuf_set_options", 00:46:36.696 "params": { 00:46:36.696 "small_pool_count": 8192, 00:46:36.696 "large_pool_count": 1024, 00:46:36.696 "small_bufsize": 8192, 00:46:36.696 "large_bufsize": 135168 00:46:36.696 } 00:46:36.696 } 00:46:36.696 ] 00:46:36.696 }, 00:46:36.696 { 00:46:36.696 "subsystem": "sock", 00:46:36.696 "config": [ 00:46:36.696 { 00:46:36.696 "method": "sock_set_default_impl", 00:46:36.696 "params": { 00:46:36.696 "impl_name": "posix" 00:46:36.696 } 00:46:36.696 }, 00:46:36.696 { 00:46:36.696 "method": "sock_impl_set_options", 00:46:36.696 "params": { 00:46:36.696 "impl_name": "ssl", 00:46:36.696 "recv_buf_size": 4096, 00:46:36.696 "send_buf_size": 4096, 00:46:36.696 "enable_recv_pipe": true, 00:46:36.696 "enable_quickack": false, 00:46:36.696 "enable_placement_id": 0, 00:46:36.696 "enable_zerocopy_send_server": true, 00:46:36.696 "enable_zerocopy_send_client": false, 00:46:36.696 "zerocopy_threshold": 0, 00:46:36.696 "tls_version": 0, 00:46:36.696 "enable_ktls": false 00:46:36.696 } 00:46:36.696 }, 00:46:36.696 { 00:46:36.696 "method": "sock_impl_set_options", 00:46:36.696 "params": { 00:46:36.696 "impl_name": "posix", 00:46:36.696 "recv_buf_size": 2097152, 00:46:36.696 "send_buf_size": 
2097152, 00:46:36.696 "enable_recv_pipe": true, 00:46:36.696 "enable_quickack": false, 00:46:36.696 "enable_placement_id": 0, 00:46:36.696 "enable_zerocopy_send_server": true, 00:46:36.696 "enable_zerocopy_send_client": false, 00:46:36.696 "zerocopy_threshold": 0, 00:46:36.696 "tls_version": 0, 00:46:36.696 "enable_ktls": false 00:46:36.696 } 00:46:36.696 } 00:46:36.696 ] 00:46:36.696 }, 00:46:36.696 { 00:46:36.696 "subsystem": "vmd", 00:46:36.696 "config": [] 00:46:36.696 }, 00:46:36.696 { 00:46:36.696 "subsystem": "accel", 00:46:36.696 "config": [ 00:46:36.696 { 00:46:36.696 "method": "accel_set_options", 00:46:36.696 "params": { 00:46:36.696 "small_cache_size": 128, 00:46:36.696 "large_cache_size": 16, 00:46:36.696 "task_count": 2048, 00:46:36.696 "sequence_count": 2048, 00:46:36.696 "buf_count": 2048 00:46:36.696 } 00:46:36.696 } 00:46:36.696 ] 00:46:36.696 }, 00:46:36.696 { 00:46:36.696 "subsystem": "bdev", 00:46:36.696 "config": [ 00:46:36.696 { 00:46:36.696 "method": "bdev_set_options", 00:46:36.696 "params": { 00:46:36.696 "bdev_io_pool_size": 65535, 00:46:36.697 "bdev_io_cache_size": 256, 00:46:36.697 "bdev_auto_examine": true, 00:46:36.697 "iobuf_small_cache_size": 128, 00:46:36.697 "iobuf_large_cache_size": 16 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "bdev_raid_set_options", 00:46:36.697 "params": { 00:46:36.697 "process_window_size_kb": 1024 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "bdev_iscsi_set_options", 00:46:36.697 "params": { 00:46:36.697 "timeout_sec": 30 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "bdev_nvme_set_options", 00:46:36.697 "params": { 00:46:36.697 "action_on_timeout": "none", 00:46:36.697 "timeout_us": 0, 00:46:36.697 "timeout_admin_us": 0, 00:46:36.697 "keep_alive_timeout_ms": 10000, 00:46:36.697 "arbitration_burst": 0, 00:46:36.697 "low_priority_weight": 0, 00:46:36.697 "medium_priority_weight": 0, 00:46:36.697 "high_priority_weight": 0, 00:46:36.697 "nvme_adminq_poll_period_us": 10000, 00:46:36.697 "nvme_ioq_poll_period_us": 0, 00:46:36.697 "io_queue_requests": 0, 00:46:36.697 "delay_cmd_submit": true, 00:46:36.697 "transport_retry_count": 4, 00:46:36.697 "bdev_retry_count": 3, 00:46:36.697 "transport_ack_timeout": 0, 00:46:36.697 "ctrlr_loss_timeout_sec": 0, 00:46:36.697 "reconnect_delay_sec": 0, 00:46:36.697 "fast_io_fail_timeout_sec": 0, 00:46:36.697 "disable_auto_failback": false, 00:46:36.697 "generate_uuids": false, 00:46:36.697 "transport_tos": 0, 00:46:36.697 "nvme_error_stat": false, 00:46:36.697 "rdma_srq_size": 0, 00:46:36.697 "io_path_stat": false, 00:46:36.697 "allow_accel_sequence": false, 00:46:36.697 "rdma_max_cq_size": 0, 00:46:36.697 "rdma_cm_event_timeout_ms": 0, 00:46:36.697 "dhchap_digests": [ 00:46:36.697 "sha256", 00:46:36.697 "sha384", 00:46:36.697 "sha512" 00:46:36.697 ], 00:46:36.697 "dhchap_dhgroups": [ 00:46:36.697 "null", 00:46:36.697 "ffdhe2048", 00:46:36.697 "ffdhe3072", 00:46:36.697 "ffdhe4096", 00:46:36.697 "ffdhe6144", 00:46:36.697 "ffdhe8192" 00:46:36.697 ] 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "bdev_nvme_set_hotplug", 00:46:36.697 "params": { 00:46:36.697 "period_us": 100000, 00:46:36.697 "enable": false 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "bdev_malloc_create", 00:46:36.697 "params": { 00:46:36.697 "name": "malloc0", 00:46:36.697 "num_blocks": 8192, 00:46:36.697 "block_size": 4096, 00:46:36.697 "physical_block_size": 4096, 00:46:36.697 "uuid": 
"5c069a7f-dd1b-4f21-849e-91050937f293", 00:46:36.697 "optimal_io_boundary": 0 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "bdev_wait_for_examine" 00:46:36.697 } 00:46:36.697 ] 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "subsystem": "nbd", 00:46:36.697 "config": [] 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "subsystem": "scheduler", 00:46:36.697 "config": [ 00:46:36.697 { 00:46:36.697 "method": "framework_set_scheduler", 00:46:36.697 "params": { 00:46:36.697 "name": "static" 00:46:36.697 } 00:46:36.697 } 00:46:36.697 ] 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "subsystem": "nvmf", 00:46:36.697 "config": [ 00:46:36.697 { 00:46:36.697 "method": "nvmf_set_config", 00:46:36.697 "params": { 00:46:36.697 "discovery_filter": "match_any", 00:46:36.697 "admin_cmd_passthru": { 00:46:36.697 "identify_ctrlr": false 00:46:36.697 } 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "nvmf_set_max_subsystems", 00:46:36.697 "params": { 00:46:36.697 "max_subsystems": 1024 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "nvmf_set_crdt", 00:46:36.697 "params": { 00:46:36.697 "crdt1": 0, 00:46:36.697 "crdt2": 0, 00:46:36.697 "crdt3": 0 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "nvmf_create_transport", 00:46:36.697 "params": { 00:46:36.697 "trtype": "TCP", 00:46:36.697 "max_queue_depth": 128, 00:46:36.697 "max_io_qpairs_per_ctrlr": 127, 00:46:36.697 "in_capsule_data_size": 4096, 00:46:36.697 "max_io_size": 131072, 00:46:36.697 "io_unit_size": 131072, 00:46:36.697 "max_aq_depth": 128, 00:46:36.697 "num_shared_buffers": 511, 00:46:36.697 "buf_cache_size": 4294967295, 00:46:36.697 "dif_insert_or_strip": false, 00:46:36.697 "zcopy": false, 00:46:36.697 "c2h_success": false, 00:46:36.697 "sock_priority": 0, 00:46:36.697 "abort_timeout_sec": 1, 00:46:36.697 "ack_timeout": 0, 00:46:36.697 "data_wr_pool_size": 0 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "nvmf_create_subsystem", 00:46:36.697 "params": { 00:46:36.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.697 "allow_any_host": false, 00:46:36.697 "serial_number": "SPDK00000000000001", 00:46:36.697 "model_number": "SPDK bdev Controller", 00:46:36.697 "max_namespaces": 10, 00:46:36.697 "min_cntlid": 1, 00:46:36.697 "max_cntlid": 65519, 00:46:36.697 "ana_reporting": false 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "nvmf_subsystem_add_host", 00:46:36.697 "params": { 00:46:36.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.697 "host": "nqn.2016-06.io.spdk:host1", 00:46:36.697 "psk": "/tmp/tmp.0KRsigaXmY" 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "nvmf_subsystem_add_ns", 00:46:36.697 "params": { 00:46:36.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.697 "namespace": { 00:46:36.697 "nsid": 1, 00:46:36.697 "bdev_name": "malloc0", 00:46:36.697 "nguid": "5C069A7FDD1B4F21849E91050937F293", 00:46:36.697 "uuid": "5c069a7f-dd1b-4f21-849e-91050937f293", 00:46:36.697 "no_auto_visible": false 00:46:36.697 } 00:46:36.697 } 00:46:36.697 }, 00:46:36.697 { 00:46:36.697 "method": "nvmf_subsystem_add_listener", 00:46:36.697 "params": { 00:46:36.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.697 "listen_address": { 00:46:36.697 "trtype": "TCP", 00:46:36.697 "adrfam": "IPv4", 00:46:36.697 "traddr": "10.0.0.2", 00:46:36.697 "trsvcid": "4420" 00:46:36.697 }, 00:46:36.697 "secure_channel": true 00:46:36.697 } 00:46:36.697 } 00:46:36.697 ] 00:46:36.697 } 00:46:36.697 ] 00:46:36.697 }' 00:46:36.697 03:43:18 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:46:36.957 03:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:46:36.957 "subsystems": [ 00:46:36.957 { 00:46:36.957 "subsystem": "keyring", 00:46:36.957 "config": [] 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "subsystem": "iobuf", 00:46:36.957 "config": [ 00:46:36.957 { 00:46:36.957 "method": "iobuf_set_options", 00:46:36.957 "params": { 00:46:36.957 "small_pool_count": 8192, 00:46:36.957 "large_pool_count": 1024, 00:46:36.957 "small_bufsize": 8192, 00:46:36.957 "large_bufsize": 135168 00:46:36.957 } 00:46:36.957 } 00:46:36.957 ] 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "subsystem": "sock", 00:46:36.957 "config": [ 00:46:36.957 { 00:46:36.957 "method": "sock_set_default_impl", 00:46:36.957 "params": { 00:46:36.957 "impl_name": "posix" 00:46:36.957 } 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "method": "sock_impl_set_options", 00:46:36.957 "params": { 00:46:36.957 "impl_name": "ssl", 00:46:36.957 "recv_buf_size": 4096, 00:46:36.957 "send_buf_size": 4096, 00:46:36.957 "enable_recv_pipe": true, 00:46:36.957 "enable_quickack": false, 00:46:36.957 "enable_placement_id": 0, 00:46:36.957 "enable_zerocopy_send_server": true, 00:46:36.957 "enable_zerocopy_send_client": false, 00:46:36.957 "zerocopy_threshold": 0, 00:46:36.957 "tls_version": 0, 00:46:36.957 "enable_ktls": false 00:46:36.957 } 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "method": "sock_impl_set_options", 00:46:36.957 "params": { 00:46:36.957 "impl_name": "posix", 00:46:36.957 "recv_buf_size": 2097152, 00:46:36.957 "send_buf_size": 2097152, 00:46:36.957 "enable_recv_pipe": true, 00:46:36.957 "enable_quickack": false, 00:46:36.957 "enable_placement_id": 0, 00:46:36.957 "enable_zerocopy_send_server": true, 00:46:36.957 "enable_zerocopy_send_client": false, 00:46:36.957 "zerocopy_threshold": 0, 00:46:36.957 "tls_version": 0, 00:46:36.957 "enable_ktls": false 00:46:36.957 } 00:46:36.957 } 00:46:36.957 ] 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "subsystem": "vmd", 00:46:36.957 "config": [] 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "subsystem": "accel", 00:46:36.957 "config": [ 00:46:36.957 { 00:46:36.957 "method": "accel_set_options", 00:46:36.957 "params": { 00:46:36.957 "small_cache_size": 128, 00:46:36.957 "large_cache_size": 16, 00:46:36.957 "task_count": 2048, 00:46:36.957 "sequence_count": 2048, 00:46:36.957 "buf_count": 2048 00:46:36.957 } 00:46:36.957 } 00:46:36.957 ] 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "subsystem": "bdev", 00:46:36.957 "config": [ 00:46:36.957 { 00:46:36.957 "method": "bdev_set_options", 00:46:36.957 "params": { 00:46:36.957 "bdev_io_pool_size": 65535, 00:46:36.957 "bdev_io_cache_size": 256, 00:46:36.957 "bdev_auto_examine": true, 00:46:36.957 "iobuf_small_cache_size": 128, 00:46:36.957 "iobuf_large_cache_size": 16 00:46:36.957 } 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "method": "bdev_raid_set_options", 00:46:36.957 "params": { 00:46:36.957 "process_window_size_kb": 1024 00:46:36.957 } 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "method": "bdev_iscsi_set_options", 00:46:36.957 "params": { 00:46:36.957 "timeout_sec": 30 00:46:36.957 } 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "method": "bdev_nvme_set_options", 00:46:36.957 "params": { 00:46:36.957 "action_on_timeout": "none", 00:46:36.957 "timeout_us": 0, 00:46:36.957 "timeout_admin_us": 0, 00:46:36.957 "keep_alive_timeout_ms": 10000, 00:46:36.957 "arbitration_burst": 0, 
00:46:36.957 "low_priority_weight": 0, 00:46:36.957 "medium_priority_weight": 0, 00:46:36.957 "high_priority_weight": 0, 00:46:36.957 "nvme_adminq_poll_period_us": 10000, 00:46:36.957 "nvme_ioq_poll_period_us": 0, 00:46:36.957 "io_queue_requests": 512, 00:46:36.957 "delay_cmd_submit": true, 00:46:36.957 "transport_retry_count": 4, 00:46:36.957 "bdev_retry_count": 3, 00:46:36.957 "transport_ack_timeout": 0, 00:46:36.957 "ctrlr_loss_timeout_sec": 0, 00:46:36.957 "reconnect_delay_sec": 0, 00:46:36.957 "fast_io_fail_timeout_sec": 0, 00:46:36.957 "disable_auto_failback": false, 00:46:36.957 "generate_uuids": false, 00:46:36.957 "transport_tos": 0, 00:46:36.957 "nvme_error_stat": false, 00:46:36.957 "rdma_srq_size": 0, 00:46:36.957 "io_path_stat": false, 00:46:36.957 "allow_accel_sequence": false, 00:46:36.957 "rdma_max_cq_size": 0, 00:46:36.957 "rdma_cm_event_timeout_ms": 0, 00:46:36.957 "dhchap_digests": [ 00:46:36.957 "sha256", 00:46:36.957 "sha384", 00:46:36.957 "sha512" 00:46:36.957 ], 00:46:36.957 "dhchap_dhgroups": [ 00:46:36.957 "null", 00:46:36.957 "ffdhe2048", 00:46:36.957 "ffdhe3072", 00:46:36.957 "ffdhe4096", 00:46:36.957 "ffdhe6144", 00:46:36.957 "ffdhe8192" 00:46:36.957 ] 00:46:36.957 } 00:46:36.957 }, 00:46:36.957 { 00:46:36.957 "method": "bdev_nvme_attach_controller", 00:46:36.957 "params": { 00:46:36.957 "name": "TLSTEST", 00:46:36.957 "trtype": "TCP", 00:46:36.957 "adrfam": "IPv4", 00:46:36.957 "traddr": "10.0.0.2", 00:46:36.957 "trsvcid": "4420", 00:46:36.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:36.957 "prchk_reftag": false, 00:46:36.957 "prchk_guard": false, 00:46:36.957 "ctrlr_loss_timeout_sec": 0, 00:46:36.957 "reconnect_delay_sec": 0, 00:46:36.957 "fast_io_fail_timeout_sec": 0, 00:46:36.957 "psk": "/tmp/tmp.0KRsigaXmY", 00:46:36.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:36.957 "hdgst": false, 00:46:36.957 "ddgst": false 00:46:36.958 } 00:46:36.958 }, 00:46:36.958 { 00:46:36.958 "method": "bdev_nvme_set_hotplug", 00:46:36.958 "params": { 00:46:36.958 "period_us": 100000, 00:46:36.958 "enable": false 00:46:36.958 } 00:46:36.958 }, 00:46:36.958 { 00:46:36.958 "method": "bdev_wait_for_examine" 00:46:36.958 } 00:46:36.958 ] 00:46:36.958 }, 00:46:36.958 { 00:46:36.958 "subsystem": "nbd", 00:46:36.958 "config": [] 00:46:36.958 } 00:46:36.958 ] 00:46:36.958 }' 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2229930 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2229930 ']' 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2229930 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2229930 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2229930' 00:46:36.958 killing process with pid 2229930 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2229930 00:46:36.958 Received shutdown signal, test time was about 10.000000 seconds 00:46:36.958 00:46:36.958 Latency(us) 00:46:36.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:46:36.958 =================================================================================================================== 00:46:36.958 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:36.958 [2024-06-11 03:43:18.321915] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:46:36.958 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2229930 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2229709 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2229709 ']' 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2229709 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2229709 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2229709' 00:46:37.217 killing process with pid 2229709 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2229709 00:46:37.217 [2024-06-11 03:43:18.536985] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:46:37.217 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2229709 00:46:37.476 03:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:46:37.476 03:43:18 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:46:37.476 "subsystems": [ 00:46:37.476 { 00:46:37.476 "subsystem": "keyring", 00:46:37.476 "config": [] 00:46:37.476 }, 00:46:37.476 { 00:46:37.476 "subsystem": "iobuf", 00:46:37.476 "config": [ 00:46:37.476 { 00:46:37.476 "method": "iobuf_set_options", 00:46:37.476 "params": { 00:46:37.476 "small_pool_count": 8192, 00:46:37.476 "large_pool_count": 1024, 00:46:37.476 "small_bufsize": 8192, 00:46:37.476 "large_bufsize": 135168 00:46:37.476 } 00:46:37.476 } 00:46:37.476 ] 00:46:37.476 }, 00:46:37.476 { 00:46:37.476 "subsystem": "sock", 00:46:37.476 "config": [ 00:46:37.476 { 00:46:37.476 "method": "sock_set_default_impl", 00:46:37.476 "params": { 00:46:37.476 "impl_name": "posix" 00:46:37.476 } 00:46:37.476 }, 00:46:37.476 { 00:46:37.476 "method": "sock_impl_set_options", 00:46:37.476 "params": { 00:46:37.476 "impl_name": "ssl", 00:46:37.476 "recv_buf_size": 4096, 00:46:37.476 "send_buf_size": 4096, 00:46:37.476 "enable_recv_pipe": true, 00:46:37.476 "enable_quickack": false, 00:46:37.476 "enable_placement_id": 0, 00:46:37.476 "enable_zerocopy_send_server": true, 00:46:37.476 "enable_zerocopy_send_client": false, 00:46:37.476 "zerocopy_threshold": 0, 00:46:37.476 "tls_version": 0, 00:46:37.476 "enable_ktls": false 00:46:37.476 } 00:46:37.476 }, 00:46:37.476 { 00:46:37.476 "method": "sock_impl_set_options", 00:46:37.476 "params": { 00:46:37.476 "impl_name": "posix", 00:46:37.476 "recv_buf_size": 2097152, 00:46:37.476 "send_buf_size": 2097152, 00:46:37.476 "enable_recv_pipe": true, 00:46:37.476 "enable_quickack": false, 00:46:37.476 "enable_placement_id": 0, 00:46:37.476 
"enable_zerocopy_send_server": true, 00:46:37.476 "enable_zerocopy_send_client": false, 00:46:37.476 "zerocopy_threshold": 0, 00:46:37.476 "tls_version": 0, 00:46:37.476 "enable_ktls": false 00:46:37.477 } 00:46:37.477 } 00:46:37.477 ] 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "subsystem": "vmd", 00:46:37.477 "config": [] 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "subsystem": "accel", 00:46:37.477 "config": [ 00:46:37.477 { 00:46:37.477 "method": "accel_set_options", 00:46:37.477 "params": { 00:46:37.477 "small_cache_size": 128, 00:46:37.477 "large_cache_size": 16, 00:46:37.477 "task_count": 2048, 00:46:37.477 "sequence_count": 2048, 00:46:37.477 "buf_count": 2048 00:46:37.477 } 00:46:37.477 } 00:46:37.477 ] 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "subsystem": "bdev", 00:46:37.477 "config": [ 00:46:37.477 { 00:46:37.477 "method": "bdev_set_options", 00:46:37.477 "params": { 00:46:37.477 "bdev_io_pool_size": 65535, 00:46:37.477 "bdev_io_cache_size": 256, 00:46:37.477 "bdev_auto_examine": true, 00:46:37.477 "iobuf_small_cache_size": 128, 00:46:37.477 "iobuf_large_cache_size": 16 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "bdev_raid_set_options", 00:46:37.477 "params": { 00:46:37.477 "process_window_size_kb": 1024 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "bdev_iscsi_set_options", 00:46:37.477 "params": { 00:46:37.477 "timeout_sec": 30 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "bdev_nvme_set_options", 00:46:37.477 "params": { 00:46:37.477 "action_on_timeout": "none", 00:46:37.477 "timeout_us": 0, 00:46:37.477 "timeout_admin_us": 0, 00:46:37.477 "keep_alive_timeout_ms": 10000, 00:46:37.477 "arbitration_burst": 0, 00:46:37.477 "low_priority_weight": 0, 00:46:37.477 "medium_priority_weight": 0, 00:46:37.477 "high_priority_weight": 0, 00:46:37.477 "nvme_adminq_poll_period_us": 10000, 00:46:37.477 "nvme_ioq_poll_period_us": 0, 00:46:37.477 "io_queue_requests": 0, 00:46:37.477 "delay_cmd_submit": true, 00:46:37.477 "transport_retry_count": 4, 00:46:37.477 "bdev_retry_count": 3, 00:46:37.477 "transport_ack_timeout": 0, 00:46:37.477 "ctrlr_loss_timeout_sec": 0, 00:46:37.477 "reconnect_delay_sec": 0, 00:46:37.477 "fast_io_fail_timeout_sec": 0, 00:46:37.477 "disable_auto_failback": false, 00:46:37.477 "generate_uuids": false, 00:46:37.477 "transport_tos": 0, 00:46:37.477 "nvme_error_stat": false, 00:46:37.477 "rdma_srq_size": 0, 00:46:37.477 "io_path_stat": false, 00:46:37.477 "allow_accel_sequence": false, 00:46:37.477 "rdma_max_cq_size": 0, 00:46:37.477 "rdma_cm_event_timeout_ms": 0, 00:46:37.477 "dhchap_digests": [ 00:46:37.477 "sha256", 00:46:37.477 "sha384", 00:46:37.477 "sha512" 00:46:37.477 ], 00:46:37.477 "dhchap_dhgroups": [ 00:46:37.477 "null", 00:46:37.477 "ffdhe2048", 00:46:37.477 "ffdhe3072", 00:46:37.477 "ffdhe4096", 00:46:37.477 "ffdhe6144", 00:46:37.477 "ffdhe8192" 00:46:37.477 ] 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "bdev_nvme_set_hotplug", 00:46:37.477 "params": { 00:46:37.477 "period_us": 100000, 00:46:37.477 "enable": false 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "bdev_malloc_create", 00:46:37.477 "params": { 00:46:37.477 "name": "malloc0", 00:46:37.477 "num_blocks": 8192, 00:46:37.477 "block_size": 4096, 00:46:37.477 "physical_block_size": 4096, 00:46:37.477 "uuid": "5c069a7f-dd1b-4f21-849e-91050937f293", 00:46:37.477 "optimal_io_boundary": 0 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "bdev_wait_for_examine" 
00:46:37.477 } 00:46:37.477 ] 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "subsystem": "nbd", 00:46:37.477 "config": [] 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "subsystem": "scheduler", 00:46:37.477 "config": [ 00:46:37.477 { 00:46:37.477 "method": "framework_set_scheduler", 00:46:37.477 "params": { 00:46:37.477 "name": "static" 00:46:37.477 } 00:46:37.477 } 00:46:37.477 ] 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "subsystem": "nvmf", 00:46:37.477 "config": [ 00:46:37.477 { 00:46:37.477 "method": "nvmf_set_config", 00:46:37.477 "params": { 00:46:37.477 "discovery_filter": "match_any", 00:46:37.477 "admin_cmd_passthru": { 00:46:37.477 "identify_ctrlr": false 00:46:37.477 } 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "nvmf_set_max_subsystems", 00:46:37.477 "params": { 00:46:37.477 "max_subsystems": 1024 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "nvmf_set_crdt", 00:46:37.477 "params": { 00:46:37.477 "crdt1": 0, 00:46:37.477 "crdt2": 0, 00:46:37.477 "crdt3": 0 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "nvmf_create_transport", 00:46:37.477 "params": { 00:46:37.477 "trtype": "TCP", 00:46:37.477 "max_queue_depth": 128, 00:46:37.477 "max_io_qpairs_per_ctrlr": 127, 00:46:37.477 "in_capsule_data_size": 4096, 00:46:37.477 "max_io_size": 131072, 00:46:37.477 "io_unit_size": 131072, 00:46:37.477 "max_aq_depth": 128, 00:46:37.477 "num_shared_buffers": 511, 00:46:37.477 "buf_cache_size": 4294967295, 00:46:37.477 "dif_insert_or_strip": false, 00:46:37.477 "zcopy": false, 00:46:37.477 "c2h_success": false, 00:46:37.477 "sock_priority": 0, 00:46:37.477 "abort_timeout_sec": 1, 00:46:37.477 "ack_timeout": 0, 00:46:37.477 "data_wr_pool_size": 0 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "nvmf_create_subsystem", 00:46:37.477 "params": { 00:46:37.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:37.477 "allow_any_host": false, 00:46:37.477 "serial_number": "SPDK00000000000001", 00:46:37.477 "model_number": "SPDK bdev Controller", 00:46:37.477 "max_namespaces": 10, 00:46:37.477 "min_cntlid": 1, 00:46:37.477 "max_cntlid": 65519, 00:46:37.477 "ana_reporting": false 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "nvmf_subsystem_add_host", 00:46:37.477 "params": { 00:46:37.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:37.477 "host": "nqn.2016-06.io.spdk:host1", 00:46:37.477 "psk": "/tmp/tmp.0KRsigaXmY" 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "nvmf_subsystem_add_ns", 00:46:37.477 "params": { 00:46:37.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:37.477 "namespace": { 00:46:37.477 "nsid": 1, 00:46:37.477 "bdev_name": "malloc0", 00:46:37.477 "nguid": "5C069A7FDD1B4F21849E91050937F293", 00:46:37.477 "uuid": "5c069a7f-dd1b-4f21-849e-91050937f293", 00:46:37.477 "no_auto_visible": false 00:46:37.477 } 00:46:37.477 } 00:46:37.477 }, 00:46:37.477 { 00:46:37.477 "method": "nvmf_subsystem_add_listener", 00:46:37.477 "params": { 00:46:37.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:37.477 "listen_address": { 00:46:37.477 "trtype": "TCP", 00:46:37.477 "adrfam": "IPv4", 00:46:37.477 "traddr": "10.0.0.2", 00:46:37.477 "trsvcid": "4420" 00:46:37.477 }, 00:46:37.477 "secure_channel": true 00:46:37.477 } 00:46:37.477 } 00:46:37.477 ] 00:46:37.477 } 00:46:37.477 ] 00:46:37.477 }' 00:46:37.477 03:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:37.477 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 
00:46:37.477 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:37.477 03:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:46:37.477 03:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2230161 00:46:37.477 03:43:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2230161 00:46:37.477 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2230161 ']' 00:46:37.478 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:37.478 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:37.478 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:37.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:37.478 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:37.478 03:43:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:37.478 [2024-06-11 03:43:18.758077] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:46:37.478 [2024-06-11 03:43:18.758120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:37.478 EAL: No free 2048 kB hugepages reported on node 1 00:46:37.478 [2024-06-11 03:43:18.813887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:37.478 [2024-06-11 03:43:18.852786] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:37.478 [2024-06-11 03:43:18.852823] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:37.478 [2024-06-11 03:43:18.852830] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:37.478 [2024-06-11 03:43:18.852836] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:37.478 [2024-06-11 03:43:18.852840] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:37.478 [2024-06-11 03:43:18.852892] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:37.737 [2024-06-11 03:43:19.048726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:37.737 [2024-06-11 03:43:19.064696] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:46:37.737 [2024-06-11 03:43:19.080747] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:37.737 [2024-06-11 03:43:19.088210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2230243 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2230243 /var/tmp/bdevperf.sock 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2230243 ']' 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:38.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
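Note: the nvmf_tgt command line above reads its config from /dev/fd/62 rather than a file; tls.sh echoes the JSON through process substitution so it never lands on disk. A minimal sketch of the same pattern, with the netns name and flags exactly as in this run ($tgt_json is a placeholder for the blob printed above):

    # Start the NVMe-oF target inside the test netns; config arrives on an fd.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
        -c <(echo "$tgt_json")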
00:46:38.304 03:43:19 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:46:38.304 "subsystems": [ 00:46:38.304 { 00:46:38.304 "subsystem": "keyring", 00:46:38.304 "config": [] 00:46:38.304 }, 00:46:38.304 { 00:46:38.304 "subsystem": "iobuf", 00:46:38.304 "config": [ 00:46:38.304 { 00:46:38.304 "method": "iobuf_set_options", 00:46:38.304 "params": { 00:46:38.304 "small_pool_count": 8192, 00:46:38.304 "large_pool_count": 1024, 00:46:38.304 "small_bufsize": 8192, 00:46:38.304 "large_bufsize": 135168 00:46:38.304 } 00:46:38.304 } 00:46:38.304 ] 00:46:38.304 }, 00:46:38.304 { 00:46:38.304 "subsystem": "sock", 00:46:38.304 "config": [ 00:46:38.304 { 00:46:38.305 "method": "sock_set_default_impl", 00:46:38.305 "params": { 00:46:38.305 "impl_name": "posix" 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "sock_impl_set_options", 00:46:38.305 "params": { 00:46:38.305 "impl_name": "ssl", 00:46:38.305 "recv_buf_size": 4096, 00:46:38.305 "send_buf_size": 4096, 00:46:38.305 "enable_recv_pipe": true, 00:46:38.305 "enable_quickack": false, 00:46:38.305 "enable_placement_id": 0, 00:46:38.305 "enable_zerocopy_send_server": true, 00:46:38.305 "enable_zerocopy_send_client": false, 00:46:38.305 "zerocopy_threshold": 0, 00:46:38.305 "tls_version": 0, 00:46:38.305 "enable_ktls": false 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "sock_impl_set_options", 00:46:38.305 "params": { 00:46:38.305 "impl_name": "posix", 00:46:38.305 "recv_buf_size": 2097152, 00:46:38.305 "send_buf_size": 2097152, 00:46:38.305 "enable_recv_pipe": true, 00:46:38.305 "enable_quickack": false, 00:46:38.305 "enable_placement_id": 0, 00:46:38.305 "enable_zerocopy_send_server": true, 00:46:38.305 "enable_zerocopy_send_client": false, 00:46:38.305 "zerocopy_threshold": 0, 00:46:38.305 "tls_version": 0, 00:46:38.305 "enable_ktls": false 00:46:38.305 } 00:46:38.305 } 00:46:38.305 ] 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "subsystem": "vmd", 00:46:38.305 "config": [] 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "subsystem": "accel", 00:46:38.305 "config": [ 00:46:38.305 { 00:46:38.305 "method": "accel_set_options", 00:46:38.305 "params": { 00:46:38.305 "small_cache_size": 128, 00:46:38.305 "large_cache_size": 16, 00:46:38.305 "task_count": 2048, 00:46:38.305 "sequence_count": 2048, 00:46:38.305 "buf_count": 2048 00:46:38.305 } 00:46:38.305 } 00:46:38.305 ] 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "subsystem": "bdev", 00:46:38.305 "config": [ 00:46:38.305 { 00:46:38.305 "method": "bdev_set_options", 00:46:38.305 "params": { 00:46:38.305 "bdev_io_pool_size": 65535, 00:46:38.305 "bdev_io_cache_size": 256, 00:46:38.305 "bdev_auto_examine": true, 00:46:38.305 "iobuf_small_cache_size": 128, 00:46:38.305 "iobuf_large_cache_size": 16 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "bdev_raid_set_options", 00:46:38.305 "params": { 00:46:38.305 "process_window_size_kb": 1024 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "bdev_iscsi_set_options", 00:46:38.305 "params": { 00:46:38.305 "timeout_sec": 30 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "bdev_nvme_set_options", 00:46:38.305 "params": { 00:46:38.305 "action_on_timeout": "none", 00:46:38.305 "timeout_us": 0, 00:46:38.305 "timeout_admin_us": 0, 00:46:38.305 "keep_alive_timeout_ms": 10000, 00:46:38.305 "arbitration_burst": 0, 00:46:38.305 "low_priority_weight": 0, 00:46:38.305 "medium_priority_weight": 0, 00:46:38.305 "high_priority_weight": 0, 00:46:38.305 
"nvme_adminq_poll_period_us": 10000, 00:46:38.305 "nvme_ioq_poll_period_us": 0, 00:46:38.305 "io_queue_requests": 512, 00:46:38.305 "delay_cmd_submit": true, 00:46:38.305 "transport_retry_count": 4, 00:46:38.305 "bdev_retry_count": 3, 00:46:38.305 "transport_ack_timeout": 0, 00:46:38.305 "ctrlr_loss_timeout_sec": 0, 00:46:38.305 "reconnect_delay_sec": 0, 00:46:38.305 "fast_io_fail_timeout_sec": 0, 00:46:38.305 "disable_auto_failback": false, 00:46:38.305 "generate_uuids": false, 00:46:38.305 "transport_tos": 0, 00:46:38.305 "nvme_error_stat": false, 00:46:38.305 "rdma_srq_size": 0, 00:46:38.305 "io_path_stat": false, 00:46:38.305 "allow_accel_sequence": false, 00:46:38.305 "rdma_max_cq_size": 0, 00:46:38.305 "rdma_cm_event_timeout_ms": 0, 00:46:38.305 "dhchap_digests": [ 00:46:38.305 "sha256", 00:46:38.305 "sha384", 00:46:38.305 "sha512" 00:46:38.305 ], 00:46:38.305 "dhchap_dhgroups": [ 00:46:38.305 "null", 00:46:38.305 "ffdhe2048", 00:46:38.305 "ffdhe3072", 00:46:38.305 "ffdhe4096", 00:46:38.305 "ffdhe6144", 00:46:38.305 "ffdhe8192" 00:46:38.305 ] 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "bdev_nvme_attach_controller", 00:46:38.305 "params": { 00:46:38.305 "name": "TLSTEST", 00:46:38.305 "trtype": "TCP", 00:46:38.305 "adrfam": "IPv4", 00:46:38.305 "traddr": "10.0.0.2", 00:46:38.305 "trsvcid": "4420", 00:46:38.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:38.305 "prchk_reftag": false, 00:46:38.305 "prchk_guard": false, 00:46:38.305 "ctrlr_loss_timeout_sec": 0, 00:46:38.305 "reconnect_delay_sec": 0, 00:46:38.305 "fast_io_fail_timeout_sec": 0, 00:46:38.305 "psk": "/tmp/tmp.0KRsigaXmY", 00:46:38.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:38.305 "hdgst": false, 00:46:38.305 "ddgst": false 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "bdev_nvme_set_hotplug", 00:46:38.305 "params": { 00:46:38.305 "period_us": 100000, 00:46:38.305 "enable": false 00:46:38.305 } 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "method": "bdev_wait_for_examine" 00:46:38.305 } 00:46:38.305 ] 00:46:38.305 }, 00:46:38.305 { 00:46:38.305 "subsystem": "nbd", 00:46:38.305 "config": [] 00:46:38.305 } 00:46:38.305 ] 00:46:38.305 }' 00:46:38.305 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:38.305 03:43:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:38.305 [2024-06-11 03:43:19.623652] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:38.305 [2024-06-11 03:43:19.623699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230243 ] 00:46:38.305 EAL: No free 2048 kB hugepages reported on node 1 00:46:38.305 [2024-06-11 03:43:19.676996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:38.564 [2024-06-11 03:43:19.716373] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:46:38.564 [2024-06-11 03:43:19.853676] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:38.564 [2024-06-11 03:43:19.853765] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:46:39.131 03:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:39.131 03:43:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:39.131 03:43:20 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:46:39.131 Running I/O for 10 seconds... 00:46:51.328 00:46:51.329 Latency(us) 00:46:51.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.329 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:46:51.329 Verification LBA range: start 0x0 length 0x2000 00:46:51.329 TLSTESTn1 : 10.01 5596.13 21.86 0.00 0.00 22836.89 4525.10 43940.33 00:46:51.329 =================================================================================================================== 00:46:51.329 Total : 5596.13 21.86 0.00 0.00 22836.89 4525.10 43940.33 00:46:51.329 0 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2230243 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2230243 ']' 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2230243 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2230243 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2230243' 00:46:51.329 killing process with pid 2230243 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2230243 00:46:51.329 Received shutdown signal, test time was about 10.000000 seconds 00:46:51.329 00:46:51.329 Latency(us) 00:46:51.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.329 =================================================================================================================== 00:46:51.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:51.329 [2024-06-11 03:43:30.603156] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2230243 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2230161 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2230161 ']' 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2230161 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2230161 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2230161' 00:46:51.329 killing process with pid 2230161 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2230161 00:46:51.329 [2024-06-11 03:43:30.818148] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2230161 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:46:51.329 03:43:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2232085 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2232085 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2232085 ']' 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:51.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:51.329 [2024-06-11 03:43:31.046063] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:46:51.329 [2024-06-11 03:43:31.046111] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:51.329 EAL: No free 2048 kB hugepages reported on node 1 00:46:51.329 [2024-06-11 03:43:31.106958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.329 [2024-06-11 03:43:31.146319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:46:51.329 [2024-06-11 03:43:31.146357] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:51.329 [2024-06-11 03:43:31.146364] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:51.329 [2024-06-11 03:43:31.146370] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:51.329 [2024-06-11 03:43:31.146375] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:51.329 [2024-06-11 03:43:31.146393] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.0KRsigaXmY 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.0KRsigaXmY 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:46:51.329 [2024-06-11 03:43:31.418091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:46:51.329 [2024-06-11 03:43:31.778988] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:51.329 [2024-06-11 03:43:31.779157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:46:51.329 malloc0 00:46:51.329 03:43:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY 00:46:51.329 [2024-06-11 03:43:32.276427] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2232335 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2232335 /var/tmp/bdevperf.sock 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2232335 ']' 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:51.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:51.329 [2024-06-11 03:43:32.325500] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:46:51.329 [2024-06-11 03:43:32.325549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232335 ] 00:46:51.329 EAL: No free 2048 kB hugepages reported on node 1 00:46:51.329 [2024-06-11 03:43:32.385528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.329 [2024-06-11 03:43:32.425128] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0KRsigaXmY 00:46:51.329 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:46:51.588 [2024-06-11 03:43:32.820766] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:51.588 nvme0n1 00:46:51.588 03:43:32 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:51.846 Running I/O for 1 seconds... 
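Note: condensed from the xtrace above, setting up the TLS target is six rpc.py calls against the default /var/tmp/spdk.sock, and attaching the initiator through the newer keyring path is two more against the bdevperf socket. NQNs, sizes, and the key path are exactly as in this run:

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure-channel (TLS) listener
    $RPC bdev_malloc_create 32 4096 -b malloc0                      # RAM-backed namespace
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0KRsigaXmY   # file-path PSK, deprecated

    BRPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $BRPC keyring_file_add_key key0 /tmp/tmp.0KRsigaXmY             # load PSK into the keyring
    $BRPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # reference the key by name

The keyring form (--psk key0) is the replacement the v24.09 deprecation warnings point at; the earlier bdevperf run at 03:43:19 still passed the raw path inside bdev_nvme_attach_controller and tripped the nvme_ctrlr_psk warning.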
00:46:52.780 00:46:52.780 Latency(us) 00:46:52.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:52.780 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:52.780 Verification LBA range: start 0x0 length 0x2000 00:46:52.780 nvme0n1 : 1.02 5298.45 20.70 0.00 0.00 23921.90 5898.24 33454.57 00:46:52.780 =================================================================================================================== 00:46:52.780 Total : 5298.45 20.70 0.00 0.00 23921.90 5898.24 33454.57 00:46:52.780 0 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2232335 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2232335 ']' 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2232335 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2232335 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:52.780 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:52.781 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2232335' 00:46:52.781 killing process with pid 2232335 00:46:52.781 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2232335 00:46:52.781 Received shutdown signal, test time was about 1.000000 seconds 00:46:52.781 00:46:52.781 Latency(us) 00:46:52.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:52.781 =================================================================================================================== 00:46:52.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:52.781 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2232335 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2232085 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2232085 ']' 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2232085 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2232085 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2232085' 00:46:53.039 killing process with pid 2232085 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2232085 00:46:53.039 [2024-06-11 03:43:34.292466] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:46:53.039 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2232085 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:53.298 
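Note: every killprocess invocation in this log expands to the same guarded sequence; a simplified reconstruction of the helper, inferred from the xtrace above and not the verbatim autotest_common.sh source:

    killprocess() {                                      # simplified sketch
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                       # fail fast if already gone
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_1 for SPDK apps
            [ "$name" != sudo ] || return 1              # never kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap and wait for clean shutdown
    }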
03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2232762 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2232762 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2232762 ']' 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:53.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:53.298 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:53.298 [2024-06-11 03:43:34.526137] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:46:53.298 [2024-06-11 03:43:34.526186] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:53.298 EAL: No free 2048 kB hugepages reported on node 1 00:46:53.298 [2024-06-11 03:43:34.589422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:53.298 [2024-06-11 03:43:34.628791] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:53.298 [2024-06-11 03:43:34.628832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:53.298 [2024-06-11 03:43:34.628839] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:53.298 [2024-06-11 03:43:34.628845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:53.298 [2024-06-11 03:43:34.628850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
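Note: the block of app_setup_trace notices repeated at each start follows from -e 0xFFFF enabling all tracepoint groups, and the notices themselves spell out how to harvest the data. As suggested there:

    spdk_trace -s nvmf -i 0        # snapshot of events while the target runs
    cp /dev/shm/nvmf_trace.0 .     # or keep the shm file for offline analysis/debug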
00:46:53.298 [2024-06-11 03:43:34.628872] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:53.557 [2024-06-11 03:43:34.756262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:53.557 malloc0 00:46:53.557 [2024-06-11 03:43:34.784388] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:53.557 [2024-06-11 03:43:34.784561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2232824 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2232824 /var/tmp/bdevperf.sock 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2232824 ']' 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:53.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:53.557 03:43:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:53.557 [2024-06-11 03:43:34.844416] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:46:53.557 [2024-06-11 03:43:34.844454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232824 ] 00:46:53.557 EAL: No free 2048 kB hugepages reported on node 1 00:46:53.557 [2024-06-11 03:43:34.896860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:53.557 [2024-06-11 03:43:34.937625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:53.815 03:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:53.815 03:43:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:53.816 03:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.0KRsigaXmY 00:46:53.816 03:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:46:54.074 [2024-06-11 03:43:35.313217] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:54.074 nvme0n1 00:46:54.074 03:43:35 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:54.074 Running I/O for 1 seconds... 00:46:55.451 00:46:55.451 Latency(us) 00:46:55.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:55.451 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:55.451 Verification LBA range: start 0x0 length 0x2000 00:46:55.451 nvme0n1 : 1.03 2690.94 10.51 0.00 0.00 47081.63 7396.21 59918.63 00:46:55.451 =================================================================================================================== 00:46:55.451 Total : 2690.94 10.51 0.00 0.00 47081.63 7396.21 59918.63 00:46:55.451 0 00:46:55.451 03:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:46:55.451 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:46:55.451 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:55.451 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:46:55.451 03:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:46:55.451 "subsystems": [ 00:46:55.451 { 00:46:55.451 "subsystem": "keyring", 00:46:55.451 "config": [ 00:46:55.451 { 00:46:55.451 "method": "keyring_file_add_key", 00:46:55.451 "params": { 00:46:55.451 "name": "key0", 00:46:55.451 "path": "/tmp/tmp.0KRsigaXmY" 00:46:55.451 } 00:46:55.451 } 00:46:55.451 ] 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "subsystem": "iobuf", 00:46:55.451 "config": [ 00:46:55.451 { 00:46:55.451 "method": "iobuf_set_options", 00:46:55.451 "params": { 00:46:55.451 "small_pool_count": 8192, 00:46:55.451 "large_pool_count": 1024, 00:46:55.451 "small_bufsize": 8192, 00:46:55.451 "large_bufsize": 135168 00:46:55.451 } 00:46:55.451 } 00:46:55.451 ] 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "subsystem": "sock", 00:46:55.451 "config": [ 00:46:55.451 { 00:46:55.451 "method": "sock_set_default_impl", 00:46:55.451 "params": { 00:46:55.451 "impl_name": "posix" 00:46:55.451 } 00:46:55.451 }, 00:46:55.451 
{ 00:46:55.451 "method": "sock_impl_set_options", 00:46:55.451 "params": { 00:46:55.451 "impl_name": "ssl", 00:46:55.451 "recv_buf_size": 4096, 00:46:55.451 "send_buf_size": 4096, 00:46:55.451 "enable_recv_pipe": true, 00:46:55.451 "enable_quickack": false, 00:46:55.451 "enable_placement_id": 0, 00:46:55.451 "enable_zerocopy_send_server": true, 00:46:55.451 "enable_zerocopy_send_client": false, 00:46:55.451 "zerocopy_threshold": 0, 00:46:55.451 "tls_version": 0, 00:46:55.451 "enable_ktls": false 00:46:55.451 } 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "method": "sock_impl_set_options", 00:46:55.451 "params": { 00:46:55.451 "impl_name": "posix", 00:46:55.451 "recv_buf_size": 2097152, 00:46:55.451 "send_buf_size": 2097152, 00:46:55.451 "enable_recv_pipe": true, 00:46:55.451 "enable_quickack": false, 00:46:55.451 "enable_placement_id": 0, 00:46:55.451 "enable_zerocopy_send_server": true, 00:46:55.451 "enable_zerocopy_send_client": false, 00:46:55.451 "zerocopy_threshold": 0, 00:46:55.451 "tls_version": 0, 00:46:55.451 "enable_ktls": false 00:46:55.451 } 00:46:55.451 } 00:46:55.451 ] 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "subsystem": "vmd", 00:46:55.451 "config": [] 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "subsystem": "accel", 00:46:55.451 "config": [ 00:46:55.451 { 00:46:55.451 "method": "accel_set_options", 00:46:55.451 "params": { 00:46:55.451 "small_cache_size": 128, 00:46:55.451 "large_cache_size": 16, 00:46:55.451 "task_count": 2048, 00:46:55.451 "sequence_count": 2048, 00:46:55.451 "buf_count": 2048 00:46:55.451 } 00:46:55.451 } 00:46:55.451 ] 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "subsystem": "bdev", 00:46:55.451 "config": [ 00:46:55.451 { 00:46:55.451 "method": "bdev_set_options", 00:46:55.451 "params": { 00:46:55.451 "bdev_io_pool_size": 65535, 00:46:55.451 "bdev_io_cache_size": 256, 00:46:55.451 "bdev_auto_examine": true, 00:46:55.451 "iobuf_small_cache_size": 128, 00:46:55.451 "iobuf_large_cache_size": 16 00:46:55.451 } 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "method": "bdev_raid_set_options", 00:46:55.451 "params": { 00:46:55.451 "process_window_size_kb": 1024 00:46:55.451 } 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "method": "bdev_iscsi_set_options", 00:46:55.451 "params": { 00:46:55.451 "timeout_sec": 30 00:46:55.451 } 00:46:55.451 }, 00:46:55.451 { 00:46:55.451 "method": "bdev_nvme_set_options", 00:46:55.451 "params": { 00:46:55.451 "action_on_timeout": "none", 00:46:55.451 "timeout_us": 0, 00:46:55.451 "timeout_admin_us": 0, 00:46:55.451 "keep_alive_timeout_ms": 10000, 00:46:55.451 "arbitration_burst": 0, 00:46:55.451 "low_priority_weight": 0, 00:46:55.451 "medium_priority_weight": 0, 00:46:55.451 "high_priority_weight": 0, 00:46:55.451 "nvme_adminq_poll_period_us": 10000, 00:46:55.451 "nvme_ioq_poll_period_us": 0, 00:46:55.451 "io_queue_requests": 0, 00:46:55.451 "delay_cmd_submit": true, 00:46:55.451 "transport_retry_count": 4, 00:46:55.451 "bdev_retry_count": 3, 00:46:55.451 "transport_ack_timeout": 0, 00:46:55.451 "ctrlr_loss_timeout_sec": 0, 00:46:55.451 "reconnect_delay_sec": 0, 00:46:55.451 "fast_io_fail_timeout_sec": 0, 00:46:55.451 "disable_auto_failback": false, 00:46:55.451 "generate_uuids": false, 00:46:55.451 "transport_tos": 0, 00:46:55.451 "nvme_error_stat": false, 00:46:55.451 "rdma_srq_size": 0, 00:46:55.451 "io_path_stat": false, 00:46:55.451 "allow_accel_sequence": false, 00:46:55.451 "rdma_max_cq_size": 0, 00:46:55.451 "rdma_cm_event_timeout_ms": 0, 00:46:55.451 "dhchap_digests": [ 00:46:55.451 "sha256", 00:46:55.451 "sha384", 
00:46:55.452 "sha512" 00:46:55.452 ], 00:46:55.452 "dhchap_dhgroups": [ 00:46:55.452 "null", 00:46:55.452 "ffdhe2048", 00:46:55.452 "ffdhe3072", 00:46:55.452 "ffdhe4096", 00:46:55.452 "ffdhe6144", 00:46:55.452 "ffdhe8192" 00:46:55.452 ] 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "bdev_nvme_set_hotplug", 00:46:55.452 "params": { 00:46:55.452 "period_us": 100000, 00:46:55.452 "enable": false 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "bdev_malloc_create", 00:46:55.452 "params": { 00:46:55.452 "name": "malloc0", 00:46:55.452 "num_blocks": 8192, 00:46:55.452 "block_size": 4096, 00:46:55.452 "physical_block_size": 4096, 00:46:55.452 "uuid": "74e3f8a2-5eb5-4564-aaa3-e7edc2cd76dc", 00:46:55.452 "optimal_io_boundary": 0 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "bdev_wait_for_examine" 00:46:55.452 } 00:46:55.452 ] 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "subsystem": "nbd", 00:46:55.452 "config": [] 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "subsystem": "scheduler", 00:46:55.452 "config": [ 00:46:55.452 { 00:46:55.452 "method": "framework_set_scheduler", 00:46:55.452 "params": { 00:46:55.452 "name": "static" 00:46:55.452 } 00:46:55.452 } 00:46:55.452 ] 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "subsystem": "nvmf", 00:46:55.452 "config": [ 00:46:55.452 { 00:46:55.452 "method": "nvmf_set_config", 00:46:55.452 "params": { 00:46:55.452 "discovery_filter": "match_any", 00:46:55.452 "admin_cmd_passthru": { 00:46:55.452 "identify_ctrlr": false 00:46:55.452 } 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "nvmf_set_max_subsystems", 00:46:55.452 "params": { 00:46:55.452 "max_subsystems": 1024 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "nvmf_set_crdt", 00:46:55.452 "params": { 00:46:55.452 "crdt1": 0, 00:46:55.452 "crdt2": 0, 00:46:55.452 "crdt3": 0 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "nvmf_create_transport", 00:46:55.452 "params": { 00:46:55.452 "trtype": "TCP", 00:46:55.452 "max_queue_depth": 128, 00:46:55.452 "max_io_qpairs_per_ctrlr": 127, 00:46:55.452 "in_capsule_data_size": 4096, 00:46:55.452 "max_io_size": 131072, 00:46:55.452 "io_unit_size": 131072, 00:46:55.452 "max_aq_depth": 128, 00:46:55.452 "num_shared_buffers": 511, 00:46:55.452 "buf_cache_size": 4294967295, 00:46:55.452 "dif_insert_or_strip": false, 00:46:55.452 "zcopy": false, 00:46:55.452 "c2h_success": false, 00:46:55.452 "sock_priority": 0, 00:46:55.452 "abort_timeout_sec": 1, 00:46:55.452 "ack_timeout": 0, 00:46:55.452 "data_wr_pool_size": 0 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "nvmf_create_subsystem", 00:46:55.452 "params": { 00:46:55.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.452 "allow_any_host": false, 00:46:55.452 "serial_number": "00000000000000000000", 00:46:55.452 "model_number": "SPDK bdev Controller", 00:46:55.452 "max_namespaces": 32, 00:46:55.452 "min_cntlid": 1, 00:46:55.452 "max_cntlid": 65519, 00:46:55.452 "ana_reporting": false 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "nvmf_subsystem_add_host", 00:46:55.452 "params": { 00:46:55.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.452 "host": "nqn.2016-06.io.spdk:host1", 00:46:55.452 "psk": "key0" 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "nvmf_subsystem_add_ns", 00:46:55.452 "params": { 00:46:55.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.452 "namespace": { 00:46:55.452 "nsid": 1, 00:46:55.452 "bdev_name": 
"malloc0", 00:46:55.452 "nguid": "74E3F8A25EB54564AAA3E7EDC2CD76DC", 00:46:55.452 "uuid": "74e3f8a2-5eb5-4564-aaa3-e7edc2cd76dc", 00:46:55.452 "no_auto_visible": false 00:46:55.452 } 00:46:55.452 } 00:46:55.452 }, 00:46:55.452 { 00:46:55.452 "method": "nvmf_subsystem_add_listener", 00:46:55.452 "params": { 00:46:55.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.452 "listen_address": { 00:46:55.452 "trtype": "TCP", 00:46:55.452 "adrfam": "IPv4", 00:46:55.452 "traddr": "10.0.0.2", 00:46:55.452 "trsvcid": "4420" 00:46:55.452 }, 00:46:55.452 "secure_channel": true 00:46:55.452 } 00:46:55.452 } 00:46:55.452 ] 00:46:55.452 } 00:46:55.452 ] 00:46:55.452 }' 00:46:55.452 03:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:46:55.711 03:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:46:55.711 "subsystems": [ 00:46:55.711 { 00:46:55.711 "subsystem": "keyring", 00:46:55.711 "config": [ 00:46:55.711 { 00:46:55.711 "method": "keyring_file_add_key", 00:46:55.711 "params": { 00:46:55.711 "name": "key0", 00:46:55.711 "path": "/tmp/tmp.0KRsigaXmY" 00:46:55.711 } 00:46:55.711 } 00:46:55.711 ] 00:46:55.711 }, 00:46:55.711 { 00:46:55.711 "subsystem": "iobuf", 00:46:55.711 "config": [ 00:46:55.711 { 00:46:55.711 "method": "iobuf_set_options", 00:46:55.711 "params": { 00:46:55.711 "small_pool_count": 8192, 00:46:55.711 "large_pool_count": 1024, 00:46:55.711 "small_bufsize": 8192, 00:46:55.711 "large_bufsize": 135168 00:46:55.711 } 00:46:55.711 } 00:46:55.711 ] 00:46:55.711 }, 00:46:55.711 { 00:46:55.711 "subsystem": "sock", 00:46:55.711 "config": [ 00:46:55.711 { 00:46:55.711 "method": "sock_set_default_impl", 00:46:55.711 "params": { 00:46:55.711 "impl_name": "posix" 00:46:55.711 } 00:46:55.711 }, 00:46:55.711 { 00:46:55.711 "method": "sock_impl_set_options", 00:46:55.711 "params": { 00:46:55.711 "impl_name": "ssl", 00:46:55.711 "recv_buf_size": 4096, 00:46:55.711 "send_buf_size": 4096, 00:46:55.711 "enable_recv_pipe": true, 00:46:55.711 "enable_quickack": false, 00:46:55.711 "enable_placement_id": 0, 00:46:55.711 "enable_zerocopy_send_server": true, 00:46:55.711 "enable_zerocopy_send_client": false, 00:46:55.711 "zerocopy_threshold": 0, 00:46:55.711 "tls_version": 0, 00:46:55.711 "enable_ktls": false 00:46:55.711 } 00:46:55.711 }, 00:46:55.711 { 00:46:55.711 "method": "sock_impl_set_options", 00:46:55.711 "params": { 00:46:55.711 "impl_name": "posix", 00:46:55.711 "recv_buf_size": 2097152, 00:46:55.711 "send_buf_size": 2097152, 00:46:55.711 "enable_recv_pipe": true, 00:46:55.711 "enable_quickack": false, 00:46:55.711 "enable_placement_id": 0, 00:46:55.711 "enable_zerocopy_send_server": true, 00:46:55.711 "enable_zerocopy_send_client": false, 00:46:55.711 "zerocopy_threshold": 0, 00:46:55.711 "tls_version": 0, 00:46:55.711 "enable_ktls": false 00:46:55.711 } 00:46:55.711 } 00:46:55.711 ] 00:46:55.711 }, 00:46:55.711 { 00:46:55.711 "subsystem": "vmd", 00:46:55.711 "config": [] 00:46:55.711 }, 00:46:55.711 { 00:46:55.711 "subsystem": "accel", 00:46:55.711 "config": [ 00:46:55.711 { 00:46:55.711 "method": "accel_set_options", 00:46:55.711 "params": { 00:46:55.711 "small_cache_size": 128, 00:46:55.711 "large_cache_size": 16, 00:46:55.711 "task_count": 2048, 00:46:55.711 "sequence_count": 2048, 00:46:55.711 "buf_count": 2048 00:46:55.711 } 00:46:55.711 } 00:46:55.711 ] 00:46:55.711 }, 00:46:55.711 { 00:46:55.711 "subsystem": "bdev", 00:46:55.711 "config": [ 00:46:55.711 { 00:46:55.711 
"method": "bdev_set_options", 00:46:55.711 "params": { 00:46:55.711 "bdev_io_pool_size": 65535, 00:46:55.711 "bdev_io_cache_size": 256, 00:46:55.711 "bdev_auto_examine": true, 00:46:55.712 "iobuf_small_cache_size": 128, 00:46:55.712 "iobuf_large_cache_size": 16 00:46:55.712 } 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "method": "bdev_raid_set_options", 00:46:55.712 "params": { 00:46:55.712 "process_window_size_kb": 1024 00:46:55.712 } 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "method": "bdev_iscsi_set_options", 00:46:55.712 "params": { 00:46:55.712 "timeout_sec": 30 00:46:55.712 } 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "method": "bdev_nvme_set_options", 00:46:55.712 "params": { 00:46:55.712 "action_on_timeout": "none", 00:46:55.712 "timeout_us": 0, 00:46:55.712 "timeout_admin_us": 0, 00:46:55.712 "keep_alive_timeout_ms": 10000, 00:46:55.712 "arbitration_burst": 0, 00:46:55.712 "low_priority_weight": 0, 00:46:55.712 "medium_priority_weight": 0, 00:46:55.712 "high_priority_weight": 0, 00:46:55.712 "nvme_adminq_poll_period_us": 10000, 00:46:55.712 "nvme_ioq_poll_period_us": 0, 00:46:55.712 "io_queue_requests": 512, 00:46:55.712 "delay_cmd_submit": true, 00:46:55.712 "transport_retry_count": 4, 00:46:55.712 "bdev_retry_count": 3, 00:46:55.712 "transport_ack_timeout": 0, 00:46:55.712 "ctrlr_loss_timeout_sec": 0, 00:46:55.712 "reconnect_delay_sec": 0, 00:46:55.712 "fast_io_fail_timeout_sec": 0, 00:46:55.712 "disable_auto_failback": false, 00:46:55.712 "generate_uuids": false, 00:46:55.712 "transport_tos": 0, 00:46:55.712 "nvme_error_stat": false, 00:46:55.712 "rdma_srq_size": 0, 00:46:55.712 "io_path_stat": false, 00:46:55.712 "allow_accel_sequence": false, 00:46:55.712 "rdma_max_cq_size": 0, 00:46:55.712 "rdma_cm_event_timeout_ms": 0, 00:46:55.712 "dhchap_digests": [ 00:46:55.712 "sha256", 00:46:55.712 "sha384", 00:46:55.712 "sha512" 00:46:55.712 ], 00:46:55.712 "dhchap_dhgroups": [ 00:46:55.712 "null", 00:46:55.712 "ffdhe2048", 00:46:55.712 "ffdhe3072", 00:46:55.712 "ffdhe4096", 00:46:55.712 "ffdhe6144", 00:46:55.712 "ffdhe8192" 00:46:55.712 ] 00:46:55.712 } 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "method": "bdev_nvme_attach_controller", 00:46:55.712 "params": { 00:46:55.712 "name": "nvme0", 00:46:55.712 "trtype": "TCP", 00:46:55.712 "adrfam": "IPv4", 00:46:55.712 "traddr": "10.0.0.2", 00:46:55.712 "trsvcid": "4420", 00:46:55.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.712 "prchk_reftag": false, 00:46:55.712 "prchk_guard": false, 00:46:55.712 "ctrlr_loss_timeout_sec": 0, 00:46:55.712 "reconnect_delay_sec": 0, 00:46:55.712 "fast_io_fail_timeout_sec": 0, 00:46:55.712 "psk": "key0", 00:46:55.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:55.712 "hdgst": false, 00:46:55.712 "ddgst": false 00:46:55.712 } 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "method": "bdev_nvme_set_hotplug", 00:46:55.712 "params": { 00:46:55.712 "period_us": 100000, 00:46:55.712 "enable": false 00:46:55.712 } 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "method": "bdev_enable_histogram", 00:46:55.712 "params": { 00:46:55.712 "name": "nvme0n1", 00:46:55.712 "enable": true 00:46:55.712 } 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "method": "bdev_wait_for_examine" 00:46:55.712 } 00:46:55.712 ] 00:46:55.712 }, 00:46:55.712 { 00:46:55.712 "subsystem": "nbd", 00:46:55.712 "config": [] 00:46:55.712 } 00:46:55.712 ] 00:46:55.712 }' 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2232824 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2232824 
']' 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2232824 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2232824 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2232824' 00:46:55.712 killing process with pid 2232824 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2232824 00:46:55.712 Received shutdown signal, test time was about 1.000000 seconds 00:46:55.712 00:46:55.712 Latency(us) 00:46:55.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:55.712 =================================================================================================================== 00:46:55.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:55.712 03:43:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2232824 00:46:55.712 03:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2232762 00:46:55.712 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2232762 ']' 00:46:55.712 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2232762 00:46:55.712 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:55.712 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:55.712 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2232762 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2232762' 00:46:55.971 killing process with pid 2232762 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2232762 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2232762 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@723 -- # xtrace_disable 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:55.971 03:43:37 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:46:55.971 "subsystems": [ 00:46:55.971 { 00:46:55.971 "subsystem": "keyring", 00:46:55.971 "config": [ 00:46:55.971 { 00:46:55.971 "method": "keyring_file_add_key", 00:46:55.971 "params": { 00:46:55.971 "name": "key0", 00:46:55.971 "path": "/tmp/tmp.0KRsigaXmY" 00:46:55.971 } 00:46:55.971 } 00:46:55.971 ] 00:46:55.971 }, 00:46:55.971 { 00:46:55.971 "subsystem": "iobuf", 00:46:55.971 "config": [ 00:46:55.971 { 00:46:55.971 "method": "iobuf_set_options", 00:46:55.971 "params": { 00:46:55.972 "small_pool_count": 8192, 00:46:55.972 "large_pool_count": 1024, 00:46:55.972 "small_bufsize": 8192, 00:46:55.972 "large_bufsize": 135168 00:46:55.972 } 00:46:55.972 } 
00:46:55.972 ] 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "subsystem": "sock", 00:46:55.972 "config": [ 00:46:55.972 { 00:46:55.972 "method": "sock_set_default_impl", 00:46:55.972 "params": { 00:46:55.972 "impl_name": "posix" 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "sock_impl_set_options", 00:46:55.972 "params": { 00:46:55.972 "impl_name": "ssl", 00:46:55.972 "recv_buf_size": 4096, 00:46:55.972 "send_buf_size": 4096, 00:46:55.972 "enable_recv_pipe": true, 00:46:55.972 "enable_quickack": false, 00:46:55.972 "enable_placement_id": 0, 00:46:55.972 "enable_zerocopy_send_server": true, 00:46:55.972 "enable_zerocopy_send_client": false, 00:46:55.972 "zerocopy_threshold": 0, 00:46:55.972 "tls_version": 0, 00:46:55.972 "enable_ktls": false 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "sock_impl_set_options", 00:46:55.972 "params": { 00:46:55.972 "impl_name": "posix", 00:46:55.972 "recv_buf_size": 2097152, 00:46:55.972 "send_buf_size": 2097152, 00:46:55.972 "enable_recv_pipe": true, 00:46:55.972 "enable_quickack": false, 00:46:55.972 "enable_placement_id": 0, 00:46:55.972 "enable_zerocopy_send_server": true, 00:46:55.972 "enable_zerocopy_send_client": false, 00:46:55.972 "zerocopy_threshold": 0, 00:46:55.972 "tls_version": 0, 00:46:55.972 "enable_ktls": false 00:46:55.972 } 00:46:55.972 } 00:46:55.972 ] 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "subsystem": "vmd", 00:46:55.972 "config": [] 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "subsystem": "accel", 00:46:55.972 "config": [ 00:46:55.972 { 00:46:55.972 "method": "accel_set_options", 00:46:55.972 "params": { 00:46:55.972 "small_cache_size": 128, 00:46:55.972 "large_cache_size": 16, 00:46:55.972 "task_count": 2048, 00:46:55.972 "sequence_count": 2048, 00:46:55.972 "buf_count": 2048 00:46:55.972 } 00:46:55.972 } 00:46:55.972 ] 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "subsystem": "bdev", 00:46:55.972 "config": [ 00:46:55.972 { 00:46:55.972 "method": "bdev_set_options", 00:46:55.972 "params": { 00:46:55.972 "bdev_io_pool_size": 65535, 00:46:55.972 "bdev_io_cache_size": 256, 00:46:55.972 "bdev_auto_examine": true, 00:46:55.972 "iobuf_small_cache_size": 128, 00:46:55.972 "iobuf_large_cache_size": 16 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "bdev_raid_set_options", 00:46:55.972 "params": { 00:46:55.972 "process_window_size_kb": 1024 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "bdev_iscsi_set_options", 00:46:55.972 "params": { 00:46:55.972 "timeout_sec": 30 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "bdev_nvme_set_options", 00:46:55.972 "params": { 00:46:55.972 "action_on_timeout": "none", 00:46:55.972 "timeout_us": 0, 00:46:55.972 "timeout_admin_us": 0, 00:46:55.972 "keep_alive_timeout_ms": 10000, 00:46:55.972 "arbitration_burst": 0, 00:46:55.972 "low_priority_weight": 0, 00:46:55.972 "medium_priority_weight": 0, 00:46:55.972 "high_priority_weight": 0, 00:46:55.972 "nvme_adminq_poll_period_us": 10000, 00:46:55.972 "nvme_ioq_poll_period_us": 0, 00:46:55.972 "io_queue_requests": 0, 00:46:55.972 "delay_cmd_submit": true, 00:46:55.972 "transport_retry_count": 4, 00:46:55.972 "bdev_retry_count": 3, 00:46:55.972 "transport_ack_timeout": 0, 00:46:55.972 "ctrlr_loss_timeout_sec": 0, 00:46:55.972 "reconnect_delay_sec": 0, 00:46:55.972 "fast_io_fail_timeout_sec": 0, 00:46:55.972 "disable_auto_failback": false, 00:46:55.972 "generate_uuids": false, 00:46:55.972 "transport_tos": 0, 00:46:55.972 "nvme_error_stat": false, 
00:46:55.972 "rdma_srq_size": 0, 00:46:55.972 "io_path_stat": false, 00:46:55.972 "allow_accel_sequence": false, 00:46:55.972 "rdma_max_cq_size": 0, 00:46:55.972 "rdma_cm_event_timeout_ms": 0, 00:46:55.972 "dhchap_digests": [ 00:46:55.972 "sha256", 00:46:55.972 "sha384", 00:46:55.972 "sha512" 00:46:55.972 ], 00:46:55.972 "dhchap_dhgroups": [ 00:46:55.972 "null", 00:46:55.972 "ffdhe2048", 00:46:55.972 "ffdhe3072", 00:46:55.972 "ffdhe4096", 00:46:55.972 "ffdhe6144", 00:46:55.972 "ffdhe8192" 00:46:55.972 ] 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "bdev_nvme_set_hotplug", 00:46:55.972 "params": { 00:46:55.972 "period_us": 100000, 00:46:55.972 "enable": false 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "bdev_malloc_create", 00:46:55.972 "params": { 00:46:55.972 "name": "malloc0", 00:46:55.972 "num_blocks": 8192, 00:46:55.972 "block_size": 4096, 00:46:55.972 "physical_block_size": 4096, 00:46:55.972 "uuid": "74e3f8a2-5eb5-4564-aaa3-e7edc2cd76dc", 00:46:55.972 "optimal_io_boundary": 0 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "bdev_wait_for_examine" 00:46:55.972 } 00:46:55.972 ] 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "subsystem": "nbd", 00:46:55.972 "config": [] 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "subsystem": "scheduler", 00:46:55.972 "config": [ 00:46:55.972 { 00:46:55.972 "method": "framework_set_scheduler", 00:46:55.972 "params": { 00:46:55.972 "name": "static" 00:46:55.972 } 00:46:55.972 } 00:46:55.972 ] 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "subsystem": "nvmf", 00:46:55.972 "config": [ 00:46:55.972 { 00:46:55.972 "method": "nvmf_set_config", 00:46:55.972 "params": { 00:46:55.972 "discovery_filter": "match_any", 00:46:55.972 "admin_cmd_passthru": { 00:46:55.972 "identify_ctrlr": false 00:46:55.972 } 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "nvmf_set_max_subsystems", 00:46:55.972 "params": { 00:46:55.972 "max_subsystems": 1024 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "nvmf_set_crdt", 00:46:55.972 "params": { 00:46:55.972 "crdt1": 0, 00:46:55.972 "crdt2": 0, 00:46:55.972 "crdt3": 0 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "nvmf_create_transport", 00:46:55.972 "params": { 00:46:55.972 "trtype": "TCP", 00:46:55.972 "max_queue_depth": 128, 00:46:55.972 "max_io_qpairs_per_ctrlr": 127, 00:46:55.972 "in_capsule_data_size": 4096, 00:46:55.972 "max_io_size": 131072, 00:46:55.972 "io_unit_size": 131072, 00:46:55.972 "max_aq_depth": 128, 00:46:55.972 "num_shared_buffers": 511, 00:46:55.972 "buf_cache_size": 4294967295, 00:46:55.972 "dif_insert_or_strip": false, 00:46:55.972 "zcopy": false, 00:46:55.972 "c2h_success": false, 00:46:55.972 "sock_priority": 0, 00:46:55.972 "abort_timeout_sec": 1, 00:46:55.972 "ack_timeout": 0, 00:46:55.972 "data_wr_pool_size": 0 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "nvmf_create_subsystem", 00:46:55.972 "params": { 00:46:55.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.972 "allow_any_host": false, 00:46:55.972 "serial_number": "00000000000000000000", 00:46:55.972 "model_number": "SPDK bdev Controller", 00:46:55.972 "max_namespaces": 32, 00:46:55.972 "min_cntlid": 1, 00:46:55.972 "max_cntlid": 65519, 00:46:55.972 "ana_reporting": false 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "nvmf_subsystem_add_host", 00:46:55.972 "params": { 00:46:55.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.972 "host": "nqn.2016-06.io.spdk:host1", 
00:46:55.972 "psk": "key0" 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "nvmf_subsystem_add_ns", 00:46:55.972 "params": { 00:46:55.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.972 "namespace": { 00:46:55.972 "nsid": 1, 00:46:55.972 "bdev_name": "malloc0", 00:46:55.972 "nguid": "74E3F8A25EB54564AAA3E7EDC2CD76DC", 00:46:55.972 "uuid": "74e3f8a2-5eb5-4564-aaa3-e7edc2cd76dc", 00:46:55.972 "no_auto_visible": false 00:46:55.972 } 00:46:55.972 } 00:46:55.972 }, 00:46:55.972 { 00:46:55.972 "method": "nvmf_subsystem_add_listener", 00:46:55.972 "params": { 00:46:55.972 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:46:55.972 "listen_address": { 00:46:55.972 "trtype": "TCP", 00:46:55.972 "adrfam": "IPv4", 00:46:55.972 "traddr": "10.0.0.2", 00:46:55.972 "trsvcid": "4420" 00:46:55.972 }, 00:46:55.972 "secure_channel": true 00:46:55.972 } 00:46:55.972 } 00:46:55.972 ] 00:46:55.972 } 00:46:55.972 ] 00:46:55.972 }' 00:46:55.972 03:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2233235 00:46:55.972 03:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2233235 00:46:55.972 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2233235 ']' 00:46:55.972 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:55.972 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:55.972 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:55.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:55.973 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:55.973 03:43:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:55.973 03:43:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:46:55.973 [2024-06-11 03:43:37.352207] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:46:55.973 [2024-06-11 03:43:37.352250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:56.232 EAL: No free 2048 kB hugepages reported on node 1 00:46:56.232 [2024-06-11 03:43:37.414813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:56.232 [2024-06-11 03:43:37.454887] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:56.232 [2024-06-11 03:43:37.454924] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:56.232 [2024-06-11 03:43:37.454931] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:56.232 [2024-06-11 03:43:37.454937] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:56.232 [2024-06-11 03:43:37.454945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:46:56.232 [2024-06-11 03:43:37.455021] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:46:56.491 [2024-06-11 03:43:37.660496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:56.491 [2024-06-11 03:43:37.692520] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:56.491 [2024-06-11 03:43:37.704318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:56.748 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:56.748 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:56.748 03:43:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:56.748 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@729 -- # xtrace_disable 00:46:56.748 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2233324 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2233324 /var/tmp/bdevperf.sock 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # '[' -z 2233324 ']' 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # local max_retries=100 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:46:57.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
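bdevperf is the initiator half of the test and is parked (-z) until the target is ready. The flags on the command traced above decode as follows; a restatement with the jenkins workspace prefix dropped:

    # -m 2         : core mask 0x2, i.e. run on core 1, away from the target on core 0
    # -z           : start idle and wait for perform_tests over the RPC socket
    # -r <path>    : where that RPC socket lives
    # -q 128 -o 4k : queue depth 128, 4 KiB I/Os
    # -w verify    : write, read back and compare
    # -t 1         : run for one second
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
    # once the controller is attached, the companion script triggers the run:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests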
00:46:57.006 03:43:38 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:46:57.006 "subsystems": [ 00:46:57.006 { 00:46:57.006 "subsystem": "keyring", 00:46:57.006 "config": [ 00:46:57.006 { 00:46:57.006 "method": "keyring_file_add_key", 00:46:57.006 "params": { 00:46:57.006 "name": "key0", 00:46:57.006 "path": "/tmp/tmp.0KRsigaXmY" 00:46:57.006 } 00:46:57.006 } 00:46:57.006 ] 00:46:57.006 }, 00:46:57.006 { 00:46:57.006 "subsystem": "iobuf", 00:46:57.006 "config": [ 00:46:57.006 { 00:46:57.006 "method": "iobuf_set_options", 00:46:57.006 "params": { 00:46:57.006 "small_pool_count": 8192, 00:46:57.006 "large_pool_count": 1024, 00:46:57.006 "small_bufsize": 8192, 00:46:57.006 "large_bufsize": 135168 00:46:57.006 } 00:46:57.006 } 00:46:57.006 ] 00:46:57.006 }, 00:46:57.006 { 00:46:57.006 "subsystem": "sock", 00:46:57.006 "config": [ 00:46:57.006 { 00:46:57.006 "method": "sock_set_default_impl", 00:46:57.006 "params": { 00:46:57.006 "impl_name": "posix" 00:46:57.006 } 00:46:57.006 }, 00:46:57.006 { 00:46:57.006 "method": "sock_impl_set_options", 00:46:57.006 "params": { 00:46:57.006 "impl_name": "ssl", 00:46:57.006 "recv_buf_size": 4096, 00:46:57.006 "send_buf_size": 4096, 00:46:57.006 "enable_recv_pipe": true, 00:46:57.006 "enable_quickack": false, 00:46:57.006 "enable_placement_id": 0, 00:46:57.006 "enable_zerocopy_send_server": true, 00:46:57.006 "enable_zerocopy_send_client": false, 00:46:57.006 "zerocopy_threshold": 0, 00:46:57.006 "tls_version": 0, 00:46:57.006 "enable_ktls": false 00:46:57.006 } 00:46:57.006 }, 00:46:57.006 { 00:46:57.006 "method": "sock_impl_set_options", 00:46:57.006 "params": { 00:46:57.006 "impl_name": "posix", 00:46:57.006 "recv_buf_size": 2097152, 00:46:57.006 "send_buf_size": 2097152, 00:46:57.006 "enable_recv_pipe": true, 00:46:57.006 "enable_quickack": false, 00:46:57.006 "enable_placement_id": 0, 00:46:57.006 "enable_zerocopy_send_server": true, 00:46:57.006 "enable_zerocopy_send_client": false, 00:46:57.006 "zerocopy_threshold": 0, 00:46:57.006 "tls_version": 0, 00:46:57.006 "enable_ktls": false 00:46:57.006 } 00:46:57.006 } 00:46:57.006 ] 00:46:57.006 }, 00:46:57.006 { 00:46:57.006 "subsystem": "vmd", 00:46:57.006 "config": [] 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "subsystem": "accel", 00:46:57.007 "config": [ 00:46:57.007 { 00:46:57.007 "method": "accel_set_options", 00:46:57.007 "params": { 00:46:57.007 "small_cache_size": 128, 00:46:57.007 "large_cache_size": 16, 00:46:57.007 "task_count": 2048, 00:46:57.007 "sequence_count": 2048, 00:46:57.007 "buf_count": 2048 00:46:57.007 } 00:46:57.007 } 00:46:57.007 ] 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "subsystem": "bdev", 00:46:57.007 "config": [ 00:46:57.007 { 00:46:57.007 "method": "bdev_set_options", 00:46:57.007 "params": { 00:46:57.007 "bdev_io_pool_size": 65535, 00:46:57.007 "bdev_io_cache_size": 256, 00:46:57.007 "bdev_auto_examine": true, 00:46:57.007 "iobuf_small_cache_size": 128, 00:46:57.007 "iobuf_large_cache_size": 16 00:46:57.007 } 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "method": "bdev_raid_set_options", 00:46:57.007 "params": { 00:46:57.007 "process_window_size_kb": 1024 00:46:57.007 } 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "method": "bdev_iscsi_set_options", 00:46:57.007 "params": { 00:46:57.007 "timeout_sec": 30 00:46:57.007 } 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "method": "bdev_nvme_set_options", 00:46:57.007 "params": { 00:46:57.007 "action_on_timeout": "none", 00:46:57.007 "timeout_us": 0, 00:46:57.007 "timeout_admin_us": 0, 00:46:57.007 "keep_alive_timeout_ms": 
10000, 00:46:57.007 "arbitration_burst": 0, 00:46:57.007 "low_priority_weight": 0, 00:46:57.007 "medium_priority_weight": 0, 00:46:57.007 "high_priority_weight": 0, 00:46:57.007 "nvme_adminq_poll_period_us": 10000, 00:46:57.007 "nvme_ioq_poll_period_us": 0, 00:46:57.007 "io_queue_requests": 512, 00:46:57.007 "delay_cmd_submit": true, 00:46:57.007 "transport_retry_count": 4, 00:46:57.007 "bdev_retry_count": 3, 00:46:57.007 "transport_ack_timeout": 0, 00:46:57.007 "ctrlr_loss_timeout_sec": 0, 00:46:57.007 "reconnect_delay_sec": 0, 00:46:57.007 "fast_io_fail_timeout_sec": 0, 00:46:57.007 "disable_auto_failback": false, 00:46:57.007 "generate_uuids": false, 00:46:57.007 "transport_tos": 0, 00:46:57.007 "nvme_error_stat": false, 00:46:57.007 "rdma_srq_size": 0, 00:46:57.007 "io_path_stat": false, 00:46:57.007 "allow_accel_sequence": false, 00:46:57.007 "rdma_max_cq_size": 0, 00:46:57.007 "rdma_cm_event_timeout_ms": 0, 00:46:57.007 "dhchap_digests": [ 00:46:57.007 "sha256", 00:46:57.007 "sha384", 00:46:57.007 "sha512" 00:46:57.007 ], 00:46:57.007 "dhchap_dhgroups": [ 00:46:57.007 "null", 00:46:57.007 "ffdhe2048", 00:46:57.007 "ffdhe3072", 00:46:57.007 "ffdhe4096", 00:46:57.007 "ffdhe6144", 00:46:57.007 "ffdhe8192" 00:46:57.007 ] 00:46:57.007 } 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "method": "bdev_nvme_attach_controller", 00:46:57.007 "params": { 00:46:57.007 "name": "nvme0", 00:46:57.007 "trtype": "TCP", 00:46:57.007 "adrfam": "IPv4", 00:46:57.007 "traddr": "10.0.0.2", 00:46:57.007 "trsvcid": "4420", 00:46:57.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:46:57.007 "prchk_reftag": false, 00:46:57.007 "prchk_guard": false, 00:46:57.007 "ctrlr_loss_timeout_sec": 0, 00:46:57.007 "reconnect_delay_sec": 0, 00:46:57.007 "fast_io_fail_timeout_sec": 0, 00:46:57.007 "psk": "key0", 00:46:57.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:46:57.007 "hdgst": false, 00:46:57.007 "ddgst": false 00:46:57.007 } 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "method": "bdev_nvme_set_hotplug", 00:46:57.007 "params": { 00:46:57.007 "period_us": 100000, 00:46:57.007 "enable": false 00:46:57.007 } 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "method": "bdev_enable_histogram", 00:46:57.007 "params": { 00:46:57.007 "name": "nvme0n1", 00:46:57.007 "enable": true 00:46:57.007 } 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "method": "bdev_wait_for_examine" 00:46:57.007 } 00:46:57.007 ] 00:46:57.007 }, 00:46:57.007 { 00:46:57.007 "subsystem": "nbd", 00:46:57.007 "config": [] 00:46:57.007 } 00:46:57.007 ] 00:46:57.007 }' 00:46:57.007 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@839 -- # xtrace_disable 00:46:57.007 03:43:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:46:57.007 [2024-06-11 03:43:38.215043] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
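Both processes load the same /tmp/tmp.0KRsigaXmY secret under the name key0 (TLS-PSK is symmetric), but only this bdevperf-side config carries an entry that consumes it. Trimmed to that single method (values as echoed above; the sibling bdev methods are elided):

    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                  "traddr": "10.0.0.2", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "psk": "key0" } }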
00:46:57.007 [2024-06-11 03:43:38.215090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2233324 ] 00:46:57.007 EAL: No free 2048 kB hugepages reported on node 1 00:46:57.007 [2024-06-11 03:43:38.274401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:57.007 [2024-06-11 03:43:38.314701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:46:57.265 [2024-06-11 03:43:38.460864] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:57.832 03:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:46:57.832 03:43:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@863 -- # return 0 00:46:57.832 03:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:46:57.832 03:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:46:57.832 03:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:46:57.832 03:43:39 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:46:58.090 Running I/O for 1 seconds... 00:46:59.028 00:46:59.028 Latency(us) 00:46:59.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:59.028 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:46:59.028 Verification LBA range: start 0x0 length 0x2000 00:46:59.028 nvme0n1 : 1.02 2639.83 10.31 0.00 0.00 48073.68 7146.54 65910.49 00:46:59.028 =================================================================================================================== 00:46:59.028 Total : 2639.83 10.31 0.00 0.00 48073.68 7146.54 65910.49 00:46:59.028 0 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # type=--id 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # id=0 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # for n in $shm_files 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:46:59.028 nvmf_trace.0 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@822 -- # return 0 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2233324 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2233324 ']' 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2233324 
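The teardown that begins with the kill -0 probe here is autotest_common.sh's killprocess helper. Reconstructed from the traced commands (an approximation of the logic, not the verbatim source):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # no pid recorded, nothing to do
        kill -0 "$pid" || return 1                 # probe: is it still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
        fi
        if [ "$process_name" != sudo ]; then       # never signal a sudo wrapper directly
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                                # reap it so sockets/ports are free for the next test
    }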
00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2233324 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2233324' 00:46:59.028 killing process with pid 2233324 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2233324 00:46:59.028 Received shutdown signal, test time was about 1.000000 seconds 00:46:59.028 00:46:59.028 Latency(us) 00:46:59.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:59.028 =================================================================================================================== 00:46:59.028 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:59.028 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2233324 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:59.287 rmmod nvme_tcp 00:46:59.287 rmmod nvme_fabrics 00:46:59.287 rmmod nvme_keyring 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2233235 ']' 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2233235 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@949 -- # '[' -z 2233235 ']' 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # kill -0 2233235 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # uname 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:46:59.287 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2233235 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2233235' 00:46:59.546 killing process with pid 2233235 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@968 -- # kill 2233235 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@973 -- # wait 2233235 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:59.546 03:43:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:02.101 03:43:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:47:02.101 03:43:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tHQilQxrDN /tmp/tmp.HuDVDFgfy5 /tmp/tmp.0KRsigaXmY 00:47:02.101 00:47:02.101 real 1m15.357s 00:47:02.101 user 1m52.016s 00:47:02.101 sys 0m28.848s 00:47:02.101 03:43:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:02.101 03:43:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:47:02.101 ************************************ 00:47:02.101 END TEST nvmf_tls 00:47:02.101 ************************************ 00:47:02.101 03:43:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:47:02.101 03:43:42 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:47:02.101 03:43:42 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:02.101 03:43:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:02.101 ************************************ 00:47:02.101 START TEST nvmf_fips 00:47:02.101 ************************************ 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:47:02.101 * Looking for test storage... 
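The nvmf_fips suite that starts here hinges on one sanity probe: ask OpenSSL for MD5, which is not a FIPS-approved digest, and require the request to fail. A condensed sketch of that probe (the suite's actual wiring goes through its NOT/valid_exec_arg helpers; this is a behavioural stand-in):

    # Under a FIPS-enforcing OpenSSL this prints "Error setting digest" and exits
    # nonzero, which is the outcome the test wants; a successful hash means FIPS is off.
    if echo -n probe | openssl md5 >/dev/null 2>&1; then
        echo "openssl accepted MD5 -> host is not in FIPS mode" >&2
        exit 1
    fi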
00:47:02.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.101 03:43:43 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.102 03:43:43 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:47:02.102 Error setting digest 00:47:02.102 002215C4197F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:47:02.102 002215C4197F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:47:02.102 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:47:02.103 03:43:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:47:08.733 
03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:47:08.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:47:08.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:47:08.733 Found net devices under 0000:86:00.0: cvl_0_0 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:47:08.733 Found net devices under 0000:86:00.1: cvl_0_1 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:47:08.733 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:47:08.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:08.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:47:08.734 00:47:08.734 --- 10.0.0.2 ping statistics --- 00:47:08.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:08.734 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:08.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:08.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:47:08.734 00:47:08.734 --- 10.0.0.1 ping statistics --- 00:47:08.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:08.734 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@723 -- # xtrace_disable 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2237632 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2237632 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2237632 ']' 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # local max_retries=100 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:08.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:47:08.734 03:43:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:08.734 [2024-06-11 03:43:49.556078] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:47:08.734 [2024-06-11 03:43:49.556123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:08.734 EAL: No free 2048 kB hugepages reported on node 1 00:47:08.734 [2024-06-11 03:43:49.616806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:08.734 [2024-06-11 03:43:49.657435] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:08.734 [2024-06-11 03:43:49.657469] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
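The two pings above are the last step of building the test network: the target's port (cvl_0_0) is moved into a private namespace while its peer (cvl_0_1) stays on the host, so NVMe/TCP traffic genuinely crosses the link. The sequence, collected and lightly condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                     # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host

From here on the target is always launched as ip netns exec cvl_0_0_ns_spdk nvmf_tgt ..., which is why its listener can own 10.0.0.2.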
00:47:08.734 [2024-06-11 03:43:49.657477] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:08.734 [2024-06-11 03:43:49.657482] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:08.734 [2024-06-11 03:43:49.657487] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:08.734 [2024-06-11 03:43:49.657504] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@729 -- # xtrace_disable 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:47:08.992 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:47:09.251 [2024-06-11 03:43:50.519307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:09.251 [2024-06-11 03:43:50.535313] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:09.251 [2024-06-11 03:43:50.535447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:09.251 [2024-06-11 03:43:50.563217] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:47:09.251 malloc0 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2237879 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2237879 /var/tmp/bdevperf.sock 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # '[' -z 2237879 ']' 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- 
# local max_retries=100 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:47:09.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@839 -- # xtrace_disable 00:47:09.251 03:43:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:09.251 [2024-06-11 03:43:50.643816] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:47:09.251 [2024-06-11 03:43:50.643869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237879 ] 00:47:09.510 EAL: No free 2048 kB hugepages reported on node 1 00:47:09.510 [2024-06-11 03:43:50.698716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:09.510 [2024-06-11 03:43:50.737829] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:47:10.078 03:43:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:47:10.078 03:43:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@863 -- # return 0 00:47:10.078 03:43:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:47:10.337 [2024-06-11 03:43:51.582204] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:10.337 [2024-06-11 03:43:51.582288] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:47:10.337 TLSTESTn1 00:47:10.337 03:43:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:47:10.337 Running I/O for 10 seconds... 
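For reference, the FIPS/TLS flow traced above condenses to a short shell sequence: write the interchange-format PSK to a key file, restrict it to mode 0600, attach a TLS-protected controller through bdevperf's RPC socket, and kick off the verify workload that produces the latency table that follows. A minimal sketch, with paths shortened to the SPDK repo root and all values taken from this trace:

# Sketch of the TLS/PSK attach sequence from the trace above (paths shortened).
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > test/nvmf/fips/key.txt
chmod 0600 test/nvmf/fips/key.txt     # the test keeps the PSK file private

# Attach a TLS-protected controller via bdevperf's RPC socket (flags as traced)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  --psk test/nvmf/fips/key.txt

# Start the queued verify job; its results appear in the latency table below
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests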
00:47:22.541 00:47:22.541 Latency(us)
00:47:22.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:47:22.541 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:47:22.541 Verification LBA range: start 0x0 length 0x2000
00:47:22.541 TLSTESTn1 : 10.01 5641.52 22.04 0.00 0.00 22653.15 6709.64 61416.59
00:47:22.541 ===================================================================================================================
00:47:22.541 Total : 5641.52 22.04 0.00 0.00 22653.15 6709.64 61416.59
00:47:22.541 0
00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # type=--id 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # id=0 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # for n in $shm_files 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:47:22.541 nvmf_trace.0 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@822 -- # return 0 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2237879 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2237879 ']' 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 2237879 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2237879 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2237879' 00:47:22.541 killing process with pid 2237879 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2237879
00:47:22.541 Received shutdown signal, test time was about 10.000000 seconds
00:47:22.541 00:47:22.541 Latency(us)
00:47:22.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:47:22.541 ===================================================================================================================
00:47:22.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
[2024-06-11 03:44:01.921981] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:47:22.541 03:44:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2237879 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:22.541 rmmod nvme_tcp 00:47:22.541 rmmod nvme_fabrics 00:47:22.541 rmmod nvme_keyring 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2237632 ']' 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2237632 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@949 -- # '[' -z 2237632 ']' 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # kill -0 2237632 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # uname 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2237632 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2237632' 00:47:22.541 killing process with pid 2237632 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@968 -- # kill 2237632 00:47:22.541 [2024-06-11 03:44:02.213042] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@973 -- # wait 2237632 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:22.541 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:22.542 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:22.542 03:44:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:22.542 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:22.542 03:44:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:23.110 03:44:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:47:23.110 03:44:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:47:23.110 00:47:23.110 real 0m21.457s 00:47:23.110 user 0m22.769s 00:47:23.110 sys 0m9.450s 00:47:23.110 03:44:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1125 -- # xtrace_disable 00:47:23.110 03:44:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:47:23.110 ************************************ 00:47:23.110 END TEST nvmf_fips 
00:47:23.110 ************************************ 00:47:23.110 03:44:04 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:47:23.110 03:44:04 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:47:23.110 03:44:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:47:23.110 03:44:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:47:23.110 03:44:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:23.369 ************************************ 00:47:23.369 START TEST nvmf_fuzz 00:47:23.369 ************************************ 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:47:23.369 * Looking for test storage... 00:47:23.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:23.369 03:44:04 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:47:23.369 03:44:04 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:47:29.930 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:47:29.931 Found 0000:86:00.0 (0x8086 - 0x159b) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:47:29.931 Found 0000:86:00.1 (0x8086 - 0x159b) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:47:29.931 Found net devices under 0000:86:00.0: cvl_0_0 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:47:29.931 Found net devices under 0000:86:00.1: cvl_0_1 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:47:29.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:29.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:47:29.931 00:47:29.931 --- 10.0.0.2 ping statistics --- 00:47:29.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.931 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:47:29.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:29.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:47:29.931 00:47:29.931 --- 10.0.0.1 ping statistics --- 00:47:29.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:29.931 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2243524 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2243524 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 2243524 ']' 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:29.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
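The nvmftestinit plumbing traced above is worth reading once: the two ice ports are resolved to their cvl_0_* netdevs via /sys/bus/pci/devices/$pci/net/, one port is moved into a network namespace to act as the target, and connectivity is verified in both directions before any NVMe traffic flows. Condensed from the trace (interface names and addresses are specific to this rig):

# Target port lives in its own namespace; the initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns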
00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:47:29.931 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:47:29.932 Malloc0 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:47:29.932 03:44:10 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:48:01.987 Fuzzing completed. 
Shutting down the fuzz application 00:48:01.987 00:48:01.987 Dumping successful admin opcodes: 00:48:01.987 8, 9, 10, 24, 00:48:01.987 Dumping successful io opcodes: 00:48:01.987 0, 9, 00:48:01.987 NS: 0x200003aeff00 I/O qp, Total commands completed: 915697, total successful commands: 5327, random_seed: 3829777536 00:48:01.987 NS: 0x200003aeff00 admin qp, Total commands completed: 91627, total successful commands: 740, random_seed: 1890987840 00:48:01.987 03:44:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:48:01.987 Fuzzing completed. Shutting down the fuzz application 00:48:01.987 00:48:01.987 Dumping successful admin opcodes: 00:48:01.987 24, 00:48:01.987 Dumping successful io opcodes: 00:48:01.987 00:48:01.987 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 276167772 00:48:01.987 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 276242676 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:48:01.987 rmmod nvme_tcp 00:48:01.987 rmmod nvme_fabrics 00:48:01.987 rmmod nvme_keyring 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2243524 ']' 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2243524 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 2243524 ']' 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 2243524 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2243524 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:48:01.987 
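The fuzz pass that just completed is driven by a deliberately small target: one malloc namespace behind one subsystem, hit first by 30 seconds of seeded random commands and then by a replay of the canned example.json command set. A condensed sketch of the RPCs and invocations as traced, with rpc.py standing in for scripts/rpc.py against the default /var/tmp/spdk.sock and paths shortened to the repo root:

# Target bring-up for the fuzz run (all values as traced above)
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create -b Malloc0 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
# 30 s run with a fixed seed (-S 123456), then a replay of example.json
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a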
03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2243524' 00:48:01.987 killing process with pid 2243524 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 2243524 00:48:01.987 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 2243524 00:48:02.244 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:48:02.244 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:48:02.244 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:48:02.244 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:02.244 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:48:02.244 03:44:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:02.244 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:02.245 03:44:43 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:04.142 03:44:45 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:48:04.142 03:44:45 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:48:04.142 00:48:04.142 real 0m40.966s 00:48:04.142 user 0m53.500s 00:48:04.142 sys 0m17.045s 00:48:04.142 03:44:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:48:04.142 03:44:45 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:48:04.142 ************************************ 00:48:04.142 END TEST nvmf_fuzz 00:48:04.142 ************************************ 00:48:04.142 03:44:45 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:48:04.142 03:44:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:48:04.142 03:44:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:48:04.142 03:44:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:48:04.399 ************************************ 00:48:04.399 START TEST nvmf_multiconnection 00:48:04.399 ************************************ 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:48:04.399 * Looking for test storage... 
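Teardown for the fuzz job, traced just above, follows the nvmftestfini shape every test in this run mirrors: delete the subsystem, stop the target, unload the kernel initiator modules, and flush the namespace plumbing. Roughly, with rpc.py again abbreviating scripts/rpc.py and the pid taken from this run:

# Condensed teardown as traced above
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 2243524 && wait 2243524    # killprocess on this run's nvmf_tgt pid
modprobe -v -r nvme-tcp         # the trace shows nvme_fabrics/nvme_keyring unload with it
modprobe -v -r nvme-fabrics
_remove_spdk_ns                 # nvmf/common.sh helper seen in the trace; tears down cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
rm ../output/nvmf_fuzz_logs1.txt ../output/nvmf_fuzz_logs2.txt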
00:48:04.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:48:04.399 03:44:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:48:10.987 03:44:51 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:48:10.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:48:10.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:48:10.987 Found net devices under 0000:86:00.0: cvl_0_0 00:48:10.987 03:44:51 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:48:10.987 Found net devices under 0000:86:00.1: cvl_0_1 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:48:10.987 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:48:10.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:10.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:48:10.988 00:48:10.988 --- 10.0.0.2 ping statistics --- 00:48:10.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:10.988 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:48:10.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:10.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:48:10.988 00:48:10.988 --- 10.0.0.1 ping statistics --- 00:48:10.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:10.988 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2252577 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2252577 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 2252577 ']' 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:10.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
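multiconnection.sh then stamps out NVMF_SUBSYS=11 identical malloc-backed subsystems; the trace below shows the first two iterations (Malloc1/cnode1 and Malloc2/cnode2). The loop shape, per the rpc_cmd calls that follow, with rpc.py standing in for scripts/rpc.py:

# Shape of the per-subsystem loop; iterations 1 and 2 appear in the trace below
rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do        # NVMF_SUBSYS=11 in this run
  rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done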
00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 [2024-06-11 03:44:51.719932] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:48:10.988 [2024-06-11 03:44:51.719976] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:10.988 EAL: No free 2048 kB hugepages reported on node 1 00:48:10.988 [2024-06-11 03:44:51.783272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:10.988 [2024-06-11 03:44:51.827869] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:10.988 [2024-06-11 03:44:51.827908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:10.988 [2024-06-11 03:44:51.827915] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:10.988 [2024-06-11 03:44:51.827923] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:10.988 [2024-06-11 03:44:51.827929] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:10.988 [2024-06-11 03:44:51.827974] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:48:10.988 [2024-06-11 03:44:51.828075] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:48:10.988 [2024-06-11 03:44:51.828098] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:48:10.988 [2024-06-11 03:44:51.828099] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 [2024-06-11 03:44:51.966015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:51 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 Malloc1 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 [2024-06-11 03:44:52.021593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 Malloc2 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.988 03:44:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.988 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 Malloc3 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 Malloc4 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
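The provisioning pattern above repeats identically for all eleven subsystems (Malloc4 through Malloc11 follow in the trace). After nvmf_create_transport -t tcp -o -u 8192 has created the TCP transport once, each iteration issues four RPCs; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. bdev_malloc_create 64 512 allocates a 64 MiB RAM-backed bdev with a 512-byte block size, -a on nvmf_create_subsystem allows any host NQN to connect, and -s SPDKn sets the serial number that the initiator-side check will later match on. Expressed directly, a sketch equivalent to the logged rpc_cmd calls:

    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i            # 64 MiB ramdisk, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420                                  # listen on the namespaced port
    done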
00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 Malloc5 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 Malloc6 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 Malloc7 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 Malloc8 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 Malloc9 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.989 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.990 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:48:10.990 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.990 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:10.990 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:10.990 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:48:10.990 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:10.990 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 Malloc10 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 Malloc11 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
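With all eleven subsystems listening on 10.0.0.2:4420, the script switches to the host side and connects each controller from the root namespace using the kernel initiator; the host NQN and host ID are both derived from this machine's UUID, as the connect lines below show. The connect loop, condensed from the trace:

    HOSTID=803833e2-2ada-e911-906e-0017a4403562
    for i in $(seq 1 11); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$HOSTID --hostid=$HOSTID \
            -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        # waitforserial SPDK$i then blocks until the new namespace is visible (sketched further below)
    done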
00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:11.248 03:44:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:48:12.622 03:44:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:48:12.622 03:44:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:12.622 03:44:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:12.622 03:44:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:12.622 03:44:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:14.523 03:44:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:48:15.458 03:44:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:48:15.458 03:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:15.458 03:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:15.458 03:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:15.458 03:44:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:17.990 03:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:17.990 03:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:17.990 03:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:48:17.990 03:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:17.990 03:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:17.990 
03:44:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:17.990 03:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:17.990 03:44:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:48:18.926 03:45:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:48:18.926 03:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:18.926 03:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:18.926 03:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:18.926 03:45:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:20.830 03:45:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:48:22.207 03:45:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:48:22.207 03:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:22.207 03:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:22.207 03:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:22.207 03:45:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:24.111 03:45:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:48:25.487 03:45:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:48:25.487 03:45:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:25.487 03:45:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:25.487 03:45:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:25.487 03:45:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:27.390 03:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:27.390 03:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:27.390 03:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:48:27.390 03:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:27.390 03:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:27.390 03:45:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:27.390 03:45:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:27.391 03:45:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:48:28.767 03:45:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:48:28.767 03:45:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:28.767 03:45:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:28.767 03:45:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:28.767 03:45:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:30.669 03:45:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:48:32.045 03:45:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:48:32.045 03:45:13 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:32.045 03:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:32.045 03:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:32.045 03:45:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:33.949 03:45:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:48:35.355 03:45:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:48:35.355 03:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:35.355 03:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:35.355 03:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:35.355 03:45:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:37.288 03:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:37.289 03:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:37.289 03:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:48:37.289 03:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:37.289 03:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:37.289 03:45:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:37.289 03:45:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:37.289 03:45:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:48:38.666 03:45:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:48:38.666 03:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:38.666 03:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:38.666 03:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 
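waitforserial is the synchronization point between connect and the workload: it repeatedly sleeps two seconds and counts block devices whose SERIAL column matches the expected SPDKn string, giving up after 16 attempts (the (( i++ <= 15 )) check in the trace). Roughly, reconstructed from the calls visible above (the harness's real helper may differ in minor details):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0   # expected device appeared
        done
        return 1
    }

Once all eleven serials are visible, fio-wrapper (below) runs the read phase: eleven libaio jobs, one per connected namespace, 256 KiB blocks (bs=262144) at queue depth 64 for 10 seconds, followed by an identically shaped randwrite phase.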
00:48:38.666 03:45:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:40.569 03:45:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:48:41.958 03:45:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:48:41.958 03:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:41.958 03:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:41.958 03:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:41.958 03:45:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:48:44.488 03:45:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:48:45.423 03:45:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:48:45.423 03:45:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:48:45.423 03:45:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:48:45.423 03:45:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:48:45.423 03:45:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:48:47.955 03:45:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:48:47.955 03:45:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:48:47.955 03:45:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:48:47.955 03:45:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:48:47.955 03:45:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:48:47.955 03:45:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:48:47.955 03:45:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:48:47.955 [global] 00:48:47.955 thread=1 00:48:47.955 invalidate=1 00:48:47.955 rw=read 00:48:47.955 time_based=1 00:48:47.955 runtime=10 00:48:47.955 ioengine=libaio 00:48:47.955 direct=1 00:48:47.955 bs=262144 00:48:47.955 iodepth=64 00:48:47.955 norandommap=1 00:48:47.955 numjobs=1 00:48:47.955 00:48:47.955 [job0] 00:48:47.955 filename=/dev/nvme0n1 00:48:47.955 [job1] 00:48:47.955 filename=/dev/nvme10n1 00:48:47.955 [job2] 00:48:47.955 filename=/dev/nvme1n1 00:48:47.955 [job3] 00:48:47.955 filename=/dev/nvme2n1 00:48:47.955 [job4] 00:48:47.955 filename=/dev/nvme3n1 00:48:47.955 [job5] 00:48:47.955 filename=/dev/nvme4n1 00:48:47.955 [job6] 00:48:47.955 filename=/dev/nvme5n1 00:48:47.955 [job7] 00:48:47.955 filename=/dev/nvme6n1 00:48:47.955 [job8] 00:48:47.955 filename=/dev/nvme7n1 00:48:47.955 [job9] 00:48:47.955 filename=/dev/nvme8n1 00:48:47.955 [job10] 00:48:47.955 filename=/dev/nvme9n1 00:48:47.955 Could not set queue depth (nvme0n1) 00:48:47.955 Could not set queue depth (nvme10n1) 00:48:47.955 Could not set queue depth (nvme1n1) 00:48:47.955 Could not set queue depth (nvme2n1) 00:48:47.955 Could not set queue depth (nvme3n1) 00:48:47.955 Could not set queue depth (nvme4n1) 00:48:47.955 Could not set queue depth (nvme5n1) 00:48:47.955 Could not set queue depth (nvme6n1) 00:48:47.955 Could not set queue depth (nvme7n1) 00:48:47.955 Could not set queue depth (nvme8n1) 00:48:47.955 Could not set queue depth (nvme9n1) 00:48:47.955 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:48:47.955 fio-3.35 00:48:47.955 Starting 11 threads 00:49:00.167 00:49:00.167 job0: 
(groupid=0, jobs=1): err= 0: pid=2259538: Tue Jun 11 03:45:39 2024 00:49:00.167 read: IOPS=663, BW=166MiB/s (174MB/s)(1672MiB/10076msec) 00:49:00.167 slat (usec): min=11, max=85818, avg=955.18, stdev=4105.10 00:49:00.167 clat (usec): min=1873, max=234063, avg=95359.28, stdev=45841.80 00:49:00.167 lat (usec): min=1906, max=235241, avg=96314.46, stdev=46453.50 00:49:00.167 clat percentiles (msec): 00:49:00.167 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 53], 00:49:00.167 | 30.00th=[ 72], 40.00th=[ 86], 50.00th=[ 104], 60.00th=[ 114], 00:49:00.167 | 70.00th=[ 124], 80.00th=[ 134], 90.00th=[ 150], 95.00th=[ 165], 00:49:00.167 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 213], 99.95th=[ 222], 00:49:00.167 | 99.99th=[ 234] 00:49:00.167 bw ( KiB/s): min=90112, max=233984, per=7.99%, avg=169625.60, stdev=41538.35, samples=20 00:49:00.167 iops : min= 352, max= 914, avg=662.60, stdev=162.24, samples=20 00:49:00.167 lat (msec) : 2=0.01%, 4=1.23%, 10=2.63%, 20=4.04%, 50=11.20% 00:49:00.167 lat (msec) : 100=28.17%, 250=52.73% 00:49:00.167 cpu : usr=0.25%, sys=2.34%, ctx=1642, majf=0, minf=4097 00:49:00.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:49:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.167 issued rwts: total=6689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.167 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.167 job1: (groupid=0, jobs=1): err= 0: pid=2259539: Tue Jun 11 03:45:39 2024 00:49:00.167 read: IOPS=841, BW=210MiB/s (220MB/s)(2119MiB/10079msec) 00:49:00.167 slat (usec): min=8, max=118087, avg=630.07, stdev=3138.38 00:49:00.167 clat (usec): min=977, max=210155, avg=75406.17, stdev=45290.27 00:49:00.167 lat (usec): min=1017, max=276352, avg=76036.24, stdev=45712.37 00:49:00.167 clat percentiles (msec): 00:49:00.167 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 31], 00:49:00.167 | 30.00th=[ 46], 40.00th=[ 62], 50.00th=[ 77], 60.00th=[ 87], 00:49:00.167 | 70.00th=[ 102], 80.00th=[ 117], 90.00th=[ 138], 95.00th=[ 153], 00:49:00.167 | 99.00th=[ 184], 99.50th=[ 199], 99.90th=[ 209], 99.95th=[ 209], 00:49:00.167 | 99.99th=[ 211] 00:49:00.167 bw ( KiB/s): min=108032, max=323719, per=10.15%, avg=215430.75, stdev=69177.09, samples=20 00:49:00.167 iops : min= 422, max= 1264, avg=841.50, stdev=270.18, samples=20 00:49:00.167 lat (usec) : 1000=0.04% 00:49:00.167 lat (msec) : 2=0.41%, 4=1.33%, 10=6.43%, 20=6.15%, 50=18.44% 00:49:00.167 lat (msec) : 100=36.48%, 250=30.73% 00:49:00.167 cpu : usr=0.19%, sys=2.96%, ctx=2097, majf=0, minf=4097 00:49:00.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:49:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.167 issued rwts: total=8477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.167 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.167 job2: (groupid=0, jobs=1): err= 0: pid=2259540: Tue Jun 11 03:45:39 2024 00:49:00.167 read: IOPS=699, BW=175MiB/s (183MB/s)(1762MiB/10080msec) 00:49:00.167 slat (usec): min=8, max=109895, avg=1028.83, stdev=4015.38 00:49:00.167 clat (msec): min=2, max=279, avg=90.43, stdev=37.00 00:49:00.167 lat (msec): min=2, max=303, avg=91.46, stdev=37.52 00:49:00.167 clat percentiles (msec): 00:49:00.167 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 43], 20.00th=[ 61], 
00:49:00.167 | 30.00th=[ 77], 40.00th=[ 83], 50.00th=[ 90], 60.00th=[ 100], 00:49:00.167 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 153], 00:49:00.167 | 99.00th=[ 194], 99.50th=[ 201], 99.90th=[ 222], 99.95th=[ 226], 00:49:00.167 | 99.99th=[ 279] 00:49:00.167 bw ( KiB/s): min=91136, max=267776, per=8.42%, avg=178764.80, stdev=48142.41, samples=20 00:49:00.167 iops : min= 356, max= 1046, avg=698.30, stdev=188.06, samples=20 00:49:00.167 lat (msec) : 4=0.09%, 10=1.05%, 20=2.87%, 50=10.73%, 100=46.21% 00:49:00.167 lat (msec) : 250=39.03%, 500=0.03% 00:49:00.167 cpu : usr=0.32%, sys=2.76%, ctx=1605, majf=0, minf=4097 00:49:00.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:49:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.167 issued rwts: total=7046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.167 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.167 job3: (groupid=0, jobs=1): err= 0: pid=2259541: Tue Jun 11 03:45:39 2024 00:49:00.167 read: IOPS=763, BW=191MiB/s (200MB/s)(1925MiB/10083msec) 00:49:00.167 slat (usec): min=10, max=127482, avg=895.72, stdev=4141.27 00:49:00.167 clat (usec): min=1290, max=225821, avg=82809.45, stdev=50393.83 00:49:00.167 lat (usec): min=1319, max=253315, avg=83705.16, stdev=51041.83 00:49:00.167 clat percentiles (msec): 00:49:00.167 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 14], 20.00th=[ 28], 00:49:00.167 | 30.00th=[ 47], 40.00th=[ 69], 50.00th=[ 86], 60.00th=[ 102], 00:49:00.167 | 70.00th=[ 116], 80.00th=[ 127], 90.00th=[ 146], 95.00th=[ 165], 00:49:00.167 | 99.00th=[ 199], 99.50th=[ 218], 99.90th=[ 222], 99.95th=[ 224], 00:49:00.167 | 99.99th=[ 226] 00:49:00.167 bw ( KiB/s): min=83968, max=441856, per=9.21%, avg=195507.20, stdev=87837.21, samples=20 00:49:00.167 iops : min= 328, max= 1726, avg=763.70, stdev=343.11, samples=20 00:49:00.167 lat (msec) : 2=0.22%, 4=1.68%, 10=5.64%, 20=6.77%, 50=16.75% 00:49:00.167 lat (msec) : 100=28.72%, 250=40.23% 00:49:00.167 cpu : usr=0.25%, sys=2.80%, ctx=1768, majf=0, minf=3347 00:49:00.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:49:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.167 issued rwts: total=7701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.167 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.167 job4: (groupid=0, jobs=1): err= 0: pid=2259542: Tue Jun 11 03:45:39 2024 00:49:00.167 read: IOPS=720, BW=180MiB/s (189MB/s)(1814MiB/10070msec) 00:49:00.167 slat (usec): min=11, max=62761, avg=913.20, stdev=3320.80 00:49:00.167 clat (usec): min=841, max=202794, avg=87849.48, stdev=39396.66 00:49:00.167 lat (usec): min=869, max=202848, avg=88762.68, stdev=39862.90 00:49:00.167 clat percentiles (msec): 00:49:00.167 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 29], 20.00th=[ 52], 00:49:00.167 | 30.00th=[ 70], 40.00th=[ 81], 50.00th=[ 93], 60.00th=[ 106], 00:49:00.167 | 70.00th=[ 114], 80.00th=[ 123], 90.00th=[ 136], 95.00th=[ 142], 00:49:00.167 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 197], 99.95th=[ 199], 00:49:00.167 | 99.99th=[ 203] 00:49:00.167 bw ( KiB/s): min=116224, max=374272, per=8.67%, avg=184105.05, stdev=57957.36, samples=20 00:49:00.167 iops : min= 454, max= 1462, avg=719.15, stdev=226.40, samples=20 00:49:00.167 lat (usec) : 1000=0.01% 00:49:00.167 
lat (msec) : 2=0.30%, 4=0.63%, 10=2.40%, 20=3.05%, 50=13.19% 00:49:00.167 lat (msec) : 100=36.04%, 250=44.38% 00:49:00.167 cpu : usr=0.16%, sys=2.79%, ctx=1823, majf=0, minf=4097 00:49:00.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:49:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.167 issued rwts: total=7254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.167 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.167 job5: (groupid=0, jobs=1): err= 0: pid=2259543: Tue Jun 11 03:45:39 2024 00:49:00.167 read: IOPS=663, BW=166MiB/s (174MB/s)(1671MiB/10072msec) 00:49:00.167 slat (usec): min=9, max=60982, avg=1183.95, stdev=4033.13 00:49:00.167 clat (usec): min=1052, max=235735, avg=95172.54, stdev=44583.48 00:49:00.167 lat (usec): min=1083, max=241820, avg=96356.49, stdev=45306.92 00:49:00.167 clat percentiles (msec): 00:49:00.167 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 24], 20.00th=[ 55], 00:49:00.167 | 30.00th=[ 77], 40.00th=[ 91], 50.00th=[ 104], 60.00th=[ 114], 00:49:00.167 | 70.00th=[ 123], 80.00th=[ 133], 90.00th=[ 146], 95.00th=[ 159], 00:49:00.167 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 211], 99.95th=[ 224], 00:49:00.167 | 99.99th=[ 236] 00:49:00.167 bw ( KiB/s): min=96768, max=361472, per=7.98%, avg=169485.65, stdev=65035.78, samples=20 00:49:00.167 iops : min= 378, max= 1412, avg=662.05, stdev=254.05, samples=20 00:49:00.167 lat (msec) : 2=0.52%, 4=0.79%, 10=2.75%, 20=4.08%, 50=10.26% 00:49:00.167 lat (msec) : 100=29.19%, 250=52.39% 00:49:00.167 cpu : usr=0.27%, sys=2.37%, ctx=1609, majf=0, minf=4097 00:49:00.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:49:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.167 issued rwts: total=6683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.167 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.167 job6: (groupid=0, jobs=1): err= 0: pid=2259544: Tue Jun 11 03:45:39 2024 00:49:00.167 read: IOPS=712, BW=178MiB/s (187MB/s)(1791MiB/10047msec) 00:49:00.167 slat (usec): min=8, max=100958, avg=1004.31, stdev=3896.60 00:49:00.167 clat (usec): min=1518, max=245164, avg=88677.59, stdev=46243.53 00:49:00.167 lat (usec): min=1553, max=299911, avg=89681.90, stdev=46770.27 00:49:00.167 clat percentiles (msec): 00:49:00.167 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 44], 00:49:00.167 | 30.00th=[ 62], 40.00th=[ 83], 50.00th=[ 96], 60.00th=[ 106], 00:49:00.167 | 70.00th=[ 115], 80.00th=[ 127], 90.00th=[ 140], 95.00th=[ 157], 00:49:00.167 | 99.00th=[ 207], 99.50th=[ 211], 99.90th=[ 224], 99.95th=[ 226], 00:49:00.167 | 99.99th=[ 245] 00:49:00.168 bw ( KiB/s): min=109056, max=350720, per=8.56%, avg=181760.00, stdev=65472.18, samples=20 00:49:00.168 iops : min= 426, max= 1370, avg=710.00, stdev=255.75, samples=20 00:49:00.168 lat (msec) : 2=0.07%, 4=0.77%, 10=4.55%, 20=5.21%, 50=11.73% 00:49:00.168 lat (msec) : 100=31.87%, 250=45.80% 00:49:00.168 cpu : usr=0.24%, sys=2.55%, ctx=1699, majf=0, minf=4097 00:49:00.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:49:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.168 issued rwts: total=7163,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:49:00.168 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.168 job7: (groupid=0, jobs=1): err= 0: pid=2259545: Tue Jun 11 03:45:39 2024 00:49:00.168 read: IOPS=689, BW=172MiB/s (181MB/s)(1729MiB/10025msec) 00:49:00.168 slat (usec): min=14, max=150658, avg=1338.69, stdev=4261.56 00:49:00.168 clat (msec): min=10, max=303, avg=91.36, stdev=34.50 00:49:00.168 lat (msec): min=10, max=303, avg=92.70, stdev=34.93 00:49:00.168 clat percentiles (msec): 00:49:00.168 | 1.00th=[ 25], 5.00th=[ 30], 10.00th=[ 44], 20.00th=[ 64], 00:49:00.168 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 100], 00:49:00.168 | 70.00th=[ 109], 80.00th=[ 118], 90.00th=[ 131], 95.00th=[ 146], 00:49:00.168 | 99.00th=[ 186], 99.50th=[ 201], 99.90th=[ 215], 99.95th=[ 215], 00:49:00.168 | 99.99th=[ 305] 00:49:00.168 bw ( KiB/s): min=128000, max=341504, per=8.26%, avg=175436.80, stdev=48409.06, samples=20 00:49:00.168 iops : min= 500, max= 1334, avg=685.30, stdev=189.10, samples=20 00:49:00.168 lat (msec) : 20=0.55%, 50=11.71%, 100=48.84%, 250=38.88%, 500=0.01% 00:49:00.168 cpu : usr=0.28%, sys=2.87%, ctx=1397, majf=0, minf=4097 00:49:00.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:49:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.168 issued rwts: total=6916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.168 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.168 job8: (groupid=0, jobs=1): err= 0: pid=2259546: Tue Jun 11 03:45:39 2024 00:49:00.168 read: IOPS=641, BW=160MiB/s (168MB/s)(1615MiB/10078msec) 00:49:00.168 slat (usec): min=9, max=117654, avg=1012.08, stdev=4145.76 00:49:00.168 clat (usec): min=1889, max=246350, avg=98741.29, stdev=41228.28 00:49:00.168 lat (usec): min=1919, max=274084, avg=99753.37, stdev=41745.00 00:49:00.168 clat percentiles (msec): 00:49:00.168 | 1.00th=[ 5], 5.00th=[ 18], 10.00th=[ 45], 20.00th=[ 65], 00:49:00.168 | 30.00th=[ 81], 40.00th=[ 93], 50.00th=[ 103], 60.00th=[ 111], 00:49:00.168 | 70.00th=[ 121], 80.00th=[ 131], 90.00th=[ 148], 95.00th=[ 161], 00:49:00.168 | 99.00th=[ 205], 99.50th=[ 207], 99.90th=[ 226], 99.95th=[ 230], 00:49:00.168 | 99.99th=[ 247] 00:49:00.168 bw ( KiB/s): min=107520, max=273408, per=7.71%, avg=163763.20, stdev=43636.12, samples=20 00:49:00.168 iops : min= 420, max= 1068, avg=639.70, stdev=170.45, samples=20 00:49:00.168 lat (msec) : 2=0.09%, 4=0.80%, 10=2.29%, 20=2.20%, 50=6.35% 00:49:00.168 lat (msec) : 100=35.63%, 250=52.63% 00:49:00.168 cpu : usr=0.17%, sys=2.26%, ctx=1594, majf=0, minf=4097 00:49:00.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:49:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.168 issued rwts: total=6460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.168 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.168 job9: (groupid=0, jobs=1): err= 0: pid=2259547: Tue Jun 11 03:45:39 2024 00:49:00.168 read: IOPS=794, BW=199MiB/s (208MB/s)(1988MiB/10014msec) 00:49:00.168 slat (usec): min=9, max=60773, avg=1079.90, stdev=3467.34 00:49:00.168 clat (usec): min=986, max=219412, avg=79433.94, stdev=46041.58 00:49:00.168 lat (usec): min=1016, max=219451, avg=80513.83, stdev=46566.53 00:49:00.168 clat percentiles (msec): 00:49:00.168 | 1.00th=[ 5], 5.00th=[ 20], 
10.00th=[ 27], 20.00th=[ 29], 00:49:00.168 | 30.00th=[ 42], 40.00th=[ 61], 50.00th=[ 80], 60.00th=[ 97], 00:49:00.168 | 70.00th=[ 113], 80.00th=[ 122], 90.00th=[ 138], 95.00th=[ 157], 00:49:00.168 | 99.00th=[ 190], 99.50th=[ 201], 99.90th=[ 213], 99.95th=[ 213], 00:49:00.168 | 99.99th=[ 220] 00:49:00.168 bw ( KiB/s): min=109056, max=530944, per=9.51%, avg=201958.40, stdev=104595.24, samples=20 00:49:00.168 iops : min= 426, max= 2074, avg=788.90, stdev=408.58, samples=20 00:49:00.168 lat (usec) : 1000=0.03% 00:49:00.168 lat (msec) : 2=0.23%, 4=0.74%, 10=2.31%, 20=1.75%, 50=30.50% 00:49:00.168 lat (msec) : 100=26.06%, 250=38.39% 00:49:00.168 cpu : usr=0.27%, sys=3.03%, ctx=1705, majf=0, minf=4097 00:49:00.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:49:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.168 issued rwts: total=7952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.168 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.168 job10: (groupid=0, jobs=1): err= 0: pid=2259548: Tue Jun 11 03:45:39 2024 00:49:00.168 read: IOPS=1120, BW=280MiB/s (294MB/s)(2821MiB/10069msec) 00:49:00.168 slat (usec): min=9, max=70326, avg=675.14, stdev=2597.86 00:49:00.168 clat (usec): min=1112, max=201334, avg=56372.76, stdev=41343.78 00:49:00.168 lat (usec): min=1152, max=226243, avg=57047.90, stdev=41815.09 00:49:00.168 clat percentiles (msec): 00:49:00.168 | 1.00th=[ 4], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 25], 00:49:00.168 | 30.00th=[ 26], 40.00th=[ 28], 50.00th=[ 35], 60.00th=[ 53], 00:49:00.168 | 70.00th=[ 77], 80.00th=[ 96], 90.00th=[ 126], 95.00th=[ 138], 00:49:00.168 | 99.00th=[ 159], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 194], 00:49:00.168 | 99.99th=[ 197] 00:49:00.168 bw ( KiB/s): min=121344, max=643584, per=13.53%, avg=287257.60, stdev=180158.57, samples=20 00:49:00.168 iops : min= 474, max= 2514, avg=1122.10, stdev=703.74, samples=20 00:49:00.168 lat (msec) : 2=0.21%, 4=0.84%, 10=1.87%, 20=3.14%, 50=52.38% 00:49:00.168 lat (msec) : 100=22.83%, 250=18.73% 00:49:00.168 cpu : usr=0.41%, sys=3.69%, ctx=2338, majf=0, minf=4097 00:49:00.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:49:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:00.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:00.168 issued rwts: total=11285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:00.168 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:00.168 00:49:00.168 Run status group 0 (all jobs): 00:49:00.168 READ: bw=2073MiB/s (2174MB/s), 160MiB/s-280MiB/s (168MB/s-294MB/s), io=20.4GiB (21.9GB), run=10014-10083msec 00:49:00.168 00:49:00.168 Disk stats (read/write): 00:49:00.168 nvme0n1: ios=13176/0, merge=0/0, ticks=1243178/0, in_queue=1243178, util=97.36% 00:49:00.168 nvme10n1: ios=16740/0, merge=0/0, ticks=1244077/0, in_queue=1244077, util=97.52% 00:49:00.168 nvme1n1: ios=13893/0, merge=0/0, ticks=1241982/0, in_queue=1241982, util=97.79% 00:49:00.168 nvme2n1: ios=15180/0, merge=0/0, ticks=1240437/0, in_queue=1240437, util=97.92% 00:49:00.168 nvme3n1: ios=14341/0, merge=0/0, ticks=1241662/0, in_queue=1241662, util=98.02% 00:49:00.168 nvme4n1: ios=13171/0, merge=0/0, ticks=1235230/0, in_queue=1235230, util=98.33% 00:49:00.168 nvme5n1: ios=14068/0, merge=0/0, ticks=1239946/0, in_queue=1239946, util=98.50% 00:49:00.168 nvme6n1: 
ios=13641/0, merge=0/0, ticks=1237632/0, in_queue=1237632, util=98.60% 00:49:00.168 nvme7n1: ios=12706/0, merge=0/0, ticks=1241333/0, in_queue=1241333, util=98.97% 00:49:00.168 nvme8n1: ios=15561/0, merge=0/0, ticks=1238611/0, in_queue=1238611, util=99.14% 00:49:00.168 nvme9n1: ios=22416/0, merge=0/0, ticks=1241251/0, in_queue=1241251, util=99.26% 00:49:00.168 03:45:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:49:00.168 [global] 00:49:00.168 thread=1 00:49:00.168 invalidate=1 00:49:00.168 rw=randwrite 00:49:00.168 time_based=1 00:49:00.168 runtime=10 00:49:00.168 ioengine=libaio 00:49:00.168 direct=1 00:49:00.168 bs=262144 00:49:00.168 iodepth=64 00:49:00.168 norandommap=1 00:49:00.168 numjobs=1 00:49:00.168 00:49:00.168 [job0] 00:49:00.168 filename=/dev/nvme0n1 00:49:00.168 [job1] 00:49:00.168 filename=/dev/nvme10n1 00:49:00.168 [job2] 00:49:00.168 filename=/dev/nvme1n1 00:49:00.168 [job3] 00:49:00.168 filename=/dev/nvme2n1 00:49:00.168 [job4] 00:49:00.168 filename=/dev/nvme3n1 00:49:00.168 [job5] 00:49:00.168 filename=/dev/nvme4n1 00:49:00.168 [job6] 00:49:00.168 filename=/dev/nvme5n1 00:49:00.168 [job7] 00:49:00.168 filename=/dev/nvme6n1 00:49:00.168 [job8] 00:49:00.168 filename=/dev/nvme7n1 00:49:00.168 [job9] 00:49:00.168 filename=/dev/nvme8n1 00:49:00.168 [job10] 00:49:00.168 filename=/dev/nvme9n1 00:49:00.168 Could not set queue depth (nvme0n1) 00:49:00.168 Could not set queue depth (nvme10n1) 00:49:00.168 Could not set queue depth (nvme1n1) 00:49:00.168 Could not set queue depth (nvme2n1) 00:49:00.168 Could not set queue depth (nvme3n1) 00:49:00.168 Could not set queue depth (nvme4n1) 00:49:00.168 Could not set queue depth (nvme5n1) 00:49:00.168 Could not set queue depth (nvme6n1) 00:49:00.168 Could not set queue depth (nvme7n1) 00:49:00.168 Could not set queue depth (nvme8n1) 00:49:00.168 Could not set queue depth (nvme9n1) 00:49:00.168 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:49:00.168 fio-3.35 00:49:00.168 Starting 11 threads 00:49:10.148 00:49:10.148 job0: (groupid=0, jobs=1): err= 0: pid=2261078: Tue Jun 11 
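Editor's note: the [global]/[jobN] dump above is the exact fio configuration the wrapper generated for this randwrite pass, and the earlier READ summary is self-consistent with it (20.4 GiB over the ~10.08 s runtime works out to roughly 2073 MiB/s). A single job from that file could be reproduced from the command line along these lines; this is only a sketch — the device path is specific to this run, and the wrapper normally discovers the NVMe-oF namespaces itself:

# rough standalone equivalent of job0 above (hypothetical direct invocation)
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=262144 --iodepth=64 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
    --time_based=1 --runtime=10 --norandommap=1 --numjobs=1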
03:45:50 2024 00:49:10.148 write: IOPS=701, BW=175MiB/s (184MB/s)(1782MiB/10166msec); 0 zone resets 00:49:10.148 slat (usec): min=24, max=34201, avg=1039.29, stdev=2793.94 00:49:10.148 clat (usec): min=1454, max=404828, avg=90192.80, stdev=58110.28 00:49:10.148 lat (msec): min=2, max=404, avg=91.23, stdev=58.91 00:49:10.148 clat percentiles (msec): 00:49:10.148 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 26], 20.00th=[ 40], 00:49:10.148 | 30.00th=[ 50], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:49:10.148 | 70.00th=[ 109], 80.00th=[ 136], 90.00th=[ 184], 95.00th=[ 194], 00:49:10.148 | 99.00th=[ 234], 99.50th=[ 264], 99.90th=[ 372], 99.95th=[ 388], 00:49:10.148 | 99.99th=[ 405] 00:49:10.148 bw ( KiB/s): min=77824, max=349696, per=11.21%, avg=180864.00, stdev=88939.03, samples=20 00:49:10.148 iops : min= 304, max= 1366, avg=706.50, stdev=347.42, samples=20 00:49:10.148 lat (msec) : 2=0.04%, 4=0.76%, 10=2.23%, 20=4.92%, 50=22.21% 00:49:10.148 lat (msec) : 100=35.68%, 250=33.49%, 500=0.67% 00:49:10.148 cpu : usr=1.60%, sys=2.11%, ctx=3847, majf=0, minf=1 00:49:10.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:49:10.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.148 issued rwts: total=0,7128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.148 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.148 job1: (groupid=0, jobs=1): err= 0: pid=2261090: Tue Jun 11 03:45:50 2024 00:49:10.148 write: IOPS=442, BW=111MiB/s (116MB/s)(1125MiB/10165msec); 0 zone resets 00:49:10.148 slat (usec): min=30, max=112056, avg=1902.16, stdev=4988.54 00:49:10.148 clat (usec): min=1380, max=409232, avg=142578.13, stdev=62933.24 00:49:10.148 lat (usec): min=1449, max=409277, avg=144480.29, stdev=63795.58 00:49:10.148 clat percentiles (msec): 00:49:10.148 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 37], 20.00th=[ 94], 00:49:10.148 | 30.00th=[ 128], 40.00th=[ 138], 50.00th=[ 146], 60.00th=[ 169], 00:49:10.148 | 70.00th=[ 186], 80.00th=[ 197], 90.00th=[ 205], 95.00th=[ 218], 00:49:10.148 | 99.00th=[ 257], 99.50th=[ 305], 99.90th=[ 393], 99.95th=[ 393], 00:49:10.148 | 99.99th=[ 409] 00:49:10.148 bw ( KiB/s): min=70144, max=214016, per=7.04%, avg=113572.65, stdev=36910.45, samples=20 00:49:10.148 iops : min= 274, max= 836, avg=443.60, stdev=144.18, samples=20 00:49:10.148 lat (msec) : 2=0.02%, 4=0.31%, 10=1.82%, 20=3.60%, 50=7.33% 00:49:10.148 lat (msec) : 100=7.62%, 250=77.60%, 500=1.69% 00:49:10.148 cpu : usr=0.89%, sys=1.51%, ctx=2026, majf=0, minf=1 00:49:10.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:49:10.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.148 issued rwts: total=0,4499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.148 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.148 job2: (groupid=0, jobs=1): err= 0: pid=2261091: Tue Jun 11 03:45:50 2024 00:49:10.148 write: IOPS=724, BW=181MiB/s (190MB/s)(1823MiB/10073msec); 0 zone resets 00:49:10.148 slat (usec): min=25, max=38355, avg=1291.96, stdev=2557.05 00:49:10.148 clat (msec): min=2, max=178, avg=87.04, stdev=32.80 00:49:10.148 lat (msec): min=2, max=178, avg=88.33, stdev=33.19 00:49:10.148 clat percentiles (msec): 00:49:10.148 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 56], 00:49:10.148 | 30.00th=[ 72], 40.00th=[ 
75], 50.00th=[ 80], 60.00th=[ 94], 00:49:10.148 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 136], 95.00th=[ 144], 00:49:10.148 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 174], 99.95th=[ 176], 00:49:10.148 | 99.99th=[ 178] 00:49:10.148 bw ( KiB/s): min=114688, max=300633, per=11.48%, avg=185118.05, stdev=53573.92, samples=20 00:49:10.148 iops : min= 448, max= 1174, avg=723.10, stdev=209.23, samples=20 00:49:10.148 lat (msec) : 4=0.01%, 10=0.19%, 20=0.71%, 50=13.70%, 100=46.87% 00:49:10.148 lat (msec) : 250=38.52% 00:49:10.148 cpu : usr=1.78%, sys=2.42%, ctx=2187, majf=0, minf=1 00:49:10.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:49:10.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.148 issued rwts: total=0,7293,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.148 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.148 job3: (groupid=0, jobs=1): err= 0: pid=2261092: Tue Jun 11 03:45:50 2024 00:49:10.148 write: IOPS=446, BW=112MiB/s (117MB/s)(1136MiB/10166msec); 0 zone resets 00:49:10.148 slat (usec): min=21, max=42477, avg=2030.83, stdev=4298.81 00:49:10.148 clat (msec): min=2, max=432, avg=141.15, stdev=56.54 00:49:10.148 lat (msec): min=3, max=432, avg=143.19, stdev=57.35 00:49:10.148 clat percentiles (msec): 00:49:10.148 | 1.00th=[ 22], 5.00th=[ 44], 10.00th=[ 66], 20.00th=[ 101], 00:49:10.148 | 30.00th=[ 108], 40.00th=[ 121], 50.00th=[ 133], 60.00th=[ 163], 00:49:10.148 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 203], 95.00th=[ 211], 00:49:10.148 | 99.00th=[ 275], 99.50th=[ 334], 99.90th=[ 418], 99.95th=[ 418], 00:49:10.148 | 99.99th=[ 435] 00:49:10.148 bw ( KiB/s): min=64000, max=244224, per=7.11%, avg=114662.40, stdev=44074.43, samples=20 00:49:10.148 iops : min= 250, max= 954, avg=447.90, stdev=172.17, samples=20 00:49:10.148 lat (msec) : 4=0.04%, 10=0.31%, 20=0.59%, 50=5.48%, 100=13.78% 00:49:10.148 lat (msec) : 250=77.81%, 500=1.98% 00:49:10.148 cpu : usr=1.16%, sys=1.41%, ctx=1571, majf=0, minf=1 00:49:10.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:49:10.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.148 issued rwts: total=0,4542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.148 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.148 job4: (groupid=0, jobs=1): err= 0: pid=2261093: Tue Jun 11 03:45:50 2024 00:49:10.148 write: IOPS=588, BW=147MiB/s (154MB/s)(1482MiB/10075msec); 0 zone resets 00:49:10.148 slat (usec): min=26, max=66181, avg=1377.53, stdev=3478.25 00:49:10.148 clat (usec): min=1312, max=241671, avg=107309.68, stdev=50076.77 00:49:10.148 lat (usec): min=1366, max=243100, avg=108687.22, stdev=50791.53 00:49:10.148 clat percentiles (msec): 00:49:10.148 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 50], 20.00th=[ 74], 00:49:10.148 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 97], 60.00th=[ 118], 00:49:10.148 | 70.00th=[ 132], 80.00th=[ 150], 90.00th=[ 184], 95.00th=[ 194], 00:49:10.148 | 99.00th=[ 226], 99.50th=[ 234], 99.90th=[ 243], 99.95th=[ 243], 00:49:10.148 | 99.99th=[ 243] 00:49:10.148 bw ( KiB/s): min=86016, max=232448, per=9.31%, avg=150155.90, stdev=46836.85, samples=20 00:49:10.148 iops : min= 336, max= 908, avg=586.50, stdev=182.99, samples=20 00:49:10.148 lat (msec) : 2=0.05%, 4=0.22%, 10=1.82%, 20=1.55%, 50=6.49% 00:49:10.148 
lat (msec) : 100=41.60%, 250=48.26% 00:49:10.148 cpu : usr=1.44%, sys=1.63%, ctx=2730, majf=0, minf=1 00:49:10.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:49:10.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.148 issued rwts: total=0,5928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.148 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.148 job5: (groupid=0, jobs=1): err= 0: pid=2261094: Tue Jun 11 03:45:50 2024 00:49:10.148 write: IOPS=562, BW=141MiB/s (147MB/s)(1412MiB/10039msec); 0 zone resets 00:49:10.148 slat (usec): min=30, max=86906, avg=1540.51, stdev=3614.61 00:49:10.148 clat (usec): min=1049, max=268086, avg=111973.08, stdev=49044.34 00:49:10.148 lat (usec): min=1143, max=271804, avg=113513.59, stdev=49672.70 00:49:10.148 clat percentiles (msec): 00:49:10.148 | 1.00th=[ 4], 5.00th=[ 30], 10.00th=[ 39], 20.00th=[ 74], 00:49:10.148 | 30.00th=[ 101], 40.00th=[ 107], 50.00th=[ 111], 60.00th=[ 125], 00:49:10.148 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 178], 95.00th=[ 203], 00:49:10.148 | 99.00th=[ 232], 99.50th=[ 236], 99.90th=[ 257], 99.95th=[ 264], 00:49:10.148 | 99.99th=[ 268] 00:49:10.148 bw ( KiB/s): min=73728, max=280576, per=8.86%, avg=142924.80, stdev=42044.51, samples=20 00:49:10.148 iops : min= 288, max= 1096, avg=558.30, stdev=164.24, samples=20 00:49:10.148 lat (msec) : 2=0.18%, 4=0.94%, 10=0.37%, 20=0.51%, 50=13.97% 00:49:10.148 lat (msec) : 100=13.12%, 250=70.72%, 500=0.18% 00:49:10.148 cpu : usr=1.76%, sys=1.74%, ctx=2355, majf=0, minf=1 00:49:10.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:49:10.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.148 issued rwts: total=0,5646,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.148 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.148 job6: (groupid=0, jobs=1): err= 0: pid=2261095: Tue Jun 11 03:45:50 2024 00:49:10.148 write: IOPS=439, BW=110MiB/s (115MB/s)(1116MiB/10166msec); 0 zone resets 00:49:10.148 slat (usec): min=24, max=64411, avg=1978.95, stdev=4489.16 00:49:10.148 clat (usec): min=1405, max=428400, avg=143614.00, stdev=60901.72 00:49:10.148 lat (usec): min=1944, max=428445, avg=145592.96, stdev=61808.84 00:49:10.148 clat percentiles (msec): 00:49:10.148 | 1.00th=[ 5], 5.00th=[ 21], 10.00th=[ 51], 20.00th=[ 102], 00:49:10.148 | 30.00th=[ 124], 40.00th=[ 136], 50.00th=[ 146], 60.00th=[ 174], 00:49:10.148 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 205], 95.00th=[ 218], 00:49:10.148 | 99.00th=[ 271], 99.50th=[ 330], 99.90th=[ 409], 99.95th=[ 409], 00:49:10.148 | 99.99th=[ 430] 00:49:10.148 bw ( KiB/s): min=66048, max=187392, per=6.99%, avg=112691.20, stdev=33261.92, samples=20 00:49:10.148 iops : min= 258, max= 732, avg=440.20, stdev=129.93, samples=20 00:49:10.148 lat (msec) : 2=0.07%, 4=0.60%, 10=1.70%, 20=2.60%, 50=5.04% 00:49:10.148 lat (msec) : 100=9.27%, 250=79.24%, 500=1.48% 00:49:10.148 cpu : usr=1.02%, sys=1.37%, ctx=1913, majf=0, minf=1 00:49:10.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:49:10.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.149 issued rwts: total=0,4465,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:49:10.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.149 job7: (groupid=0, jobs=1): err= 0: pid=2261096: Tue Jun 11 03:45:50 2024 00:49:10.149 write: IOPS=603, BW=151MiB/s (158MB/s)(1534MiB/10167msec); 0 zone resets 00:49:10.149 slat (usec): min=23, max=37613, avg=1198.15, stdev=2981.87 00:49:10.149 clat (usec): min=1325, max=409060, avg=104739.39, stdev=52768.58 00:49:10.149 lat (usec): min=1365, max=409108, avg=105937.54, stdev=53441.83 00:49:10.149 clat percentiles (msec): 00:49:10.149 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 41], 20.00th=[ 68], 00:49:10.149 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 101], 60.00th=[ 109], 00:49:10.149 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 178], 95.00th=[ 184], 00:49:10.149 | 99.00th=[ 251], 99.50th=[ 284], 99.90th=[ 376], 99.95th=[ 393], 00:49:10.149 | 99.99th=[ 409] 00:49:10.149 bw ( KiB/s): min=69632, max=244224, per=9.64%, avg=155482.45, stdev=52778.20, samples=20 00:49:10.149 iops : min= 272, max= 954, avg=607.35, stdev=206.17, samples=20 00:49:10.149 lat (msec) : 2=0.10%, 4=0.18%, 10=1.42%, 20=2.92%, 50=8.08% 00:49:10.149 lat (msec) : 100=36.65%, 250=49.71%, 500=0.95% 00:49:10.149 cpu : usr=1.43%, sys=1.82%, ctx=3255, majf=0, minf=1 00:49:10.149 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:49:10.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.149 issued rwts: total=0,6137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.149 job8: (groupid=0, jobs=1): err= 0: pid=2261097: Tue Jun 11 03:45:50 2024 00:49:10.149 write: IOPS=520, BW=130MiB/s (137MB/s)(1321MiB/10147msec); 0 zone resets 00:49:10.149 slat (usec): min=28, max=65215, avg=1806.13, stdev=4060.55 00:49:10.149 clat (msec): min=6, max=387, avg=120.97, stdev=66.08 00:49:10.149 lat (msec): min=6, max=387, avg=122.77, stdev=66.96 00:49:10.149 clat percentiles (msec): 00:49:10.149 | 1.00th=[ 17], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 44], 00:49:10.149 | 30.00th=[ 61], 40.00th=[ 103], 50.00th=[ 117], 60.00th=[ 140], 00:49:10.149 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 199], 95.00th=[ 220], 00:49:10.149 | 99.00th=[ 255], 99.50th=[ 292], 99.90th=[ 372], 99.95th=[ 388], 00:49:10.149 | 99.99th=[ 388] 00:49:10.149 bw ( KiB/s): min=71680, max=367104, per=8.29%, avg=133697.20, stdev=81170.39, samples=20 00:49:10.149 iops : min= 280, max= 1434, avg=522.25, stdev=317.07, samples=20 00:49:10.149 lat (msec) : 10=0.13%, 20=1.15%, 50=24.56%, 100=12.64%, 250=59.98% 00:49:10.149 lat (msec) : 500=1.53% 00:49:10.149 cpu : usr=1.48%, sys=1.64%, ctx=1618, majf=0, minf=1 00:49:10.149 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:49:10.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.149 issued rwts: total=0,5285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.149 job9: (groupid=0, jobs=1): err= 0: pid=2261098: Tue Jun 11 03:45:50 2024 00:49:10.149 write: IOPS=600, BW=150MiB/s (157MB/s)(1510MiB/10059msec); 0 zone resets 00:49:10.149 slat (usec): min=21, max=23873, avg=1464.78, stdev=2933.71 00:49:10.149 clat (msec): min=2, max=257, avg=105.09, stdev=35.65 00:49:10.149 lat (msec): min=2, max=259, avg=106.56, stdev=36.09 00:49:10.149 clat percentiles 
(msec): 00:49:10.149 | 1.00th=[ 7], 5.00th=[ 22], 10.00th=[ 56], 20.00th=[ 84], 00:49:10.149 | 30.00th=[ 102], 40.00th=[ 107], 50.00th=[ 109], 60.00th=[ 112], 00:49:10.149 | 70.00th=[ 123], 80.00th=[ 131], 90.00th=[ 138], 95.00th=[ 148], 00:49:10.149 | 99.00th=[ 184], 99.50th=[ 236], 99.90th=[ 251], 99.95th=[ 253], 00:49:10.149 | 99.99th=[ 257] 00:49:10.149 bw ( KiB/s): min=118784, max=240128, per=9.49%, avg=153011.20, stdev=29121.42, samples=20 00:49:10.149 iops : min= 464, max= 938, avg=597.70, stdev=113.76, samples=20 00:49:10.149 lat (msec) : 4=0.13%, 10=1.82%, 20=2.53%, 50=4.52%, 100=17.42% 00:49:10.149 lat (msec) : 250=73.46%, 500=0.12% 00:49:10.149 cpu : usr=1.39%, sys=2.00%, ctx=2325, majf=0, minf=1 00:49:10.149 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:49:10.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.149 issued rwts: total=0,6040,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.149 job10: (groupid=0, jobs=1): err= 0: pid=2261099: Tue Jun 11 03:45:50 2024 00:49:10.149 write: IOPS=702, BW=176MiB/s (184MB/s)(1773MiB/10093msec); 0 zone resets 00:49:10.149 slat (usec): min=24, max=110561, avg=1130.40, stdev=3136.82 00:49:10.149 clat (usec): min=1532, max=357871, avg=89715.04, stdev=47201.26 00:49:10.149 lat (usec): min=1568, max=357971, avg=90845.43, stdev=47795.80 00:49:10.149 clat percentiles (msec): 00:49:10.149 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 41], 00:49:10.149 | 30.00th=[ 71], 40.00th=[ 85], 50.00th=[ 102], 60.00th=[ 107], 00:49:10.149 | 70.00th=[ 110], 80.00th=[ 127], 90.00th=[ 144], 95.00th=[ 153], 00:49:10.149 | 99.00th=[ 211], 99.50th=[ 236], 99.90th=[ 351], 99.95th=[ 355], 00:49:10.149 | 99.99th=[ 359] 00:49:10.149 bw ( KiB/s): min=113664, max=287744, per=11.16%, avg=179942.40, stdev=43546.61, samples=20 00:49:10.149 iops : min= 444, max= 1124, avg=702.90, stdev=170.10, samples=20 00:49:10.149 lat (msec) : 2=0.08%, 4=0.62%, 10=3.64%, 20=7.33%, 50=11.63% 00:49:10.149 lat (msec) : 100=23.08%, 250=53.27%, 500=0.34% 00:49:10.149 cpu : usr=1.43%, sys=2.31%, ctx=3662, majf=0, minf=1 00:49:10.149 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:49:10.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:49:10.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:49:10.149 issued rwts: total=0,7092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:49:10.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:49:10.149 00:49:10.149 Run status group 0 (all jobs): 00:49:10.149 WRITE: bw=1575MiB/s (1652MB/s), 110MiB/s-181MiB/s (115MB/s-190MB/s), io=15.6GiB (16.8GB), run=10039-10167msec 00:49:10.149 00:49:10.149 Disk stats (read/write): 00:49:10.149 nvme0n1: ios=49/14093, merge=0/0, ticks=34/1211165, in_queue=1211199, util=97.38% 00:49:10.149 nvme10n1: ios=50/8831, merge=0/0, ticks=2315/1175866, in_queue=1178181, util=100.00% 00:49:10.149 nvme1n1: ios=46/14339, merge=0/0, ticks=1106/1208914, in_queue=1210020, util=100.00% 00:49:10.149 nvme2n1: ios=0/8932, merge=0/0, ticks=0/1201822, in_queue=1201822, util=97.82% 00:49:10.149 nvme3n1: ios=44/11605, merge=0/0, ticks=1018/1213766, in_queue=1214784, util=100.00% 00:49:10.149 nvme4n1: ios=49/10926, merge=0/0, ticks=611/1215017, in_queue=1215628, util=100.00% 00:49:10.149 nvme5n1: ios=43/8769, merge=0/0, ticks=1058/1199319, 
in_queue=1200377, util=100.00% 00:49:10.149 nvme6n1: ios=45/12111, merge=0/0, ticks=1731/1211062, in_queue=1212793, util=100.00% 00:49:10.149 nvme7n1: ios=41/10377, merge=0/0, ticks=1129/1196132, in_queue=1197261, util=100.00% 00:49:10.149 nvme8n1: ios=0/11793, merge=0/0, ticks=0/1210813, in_queue=1210813, util=98.94% 00:49:10.149 nvme9n1: ios=49/13886, merge=0/0, ticks=1864/1195471, in_queue=1197335, util=100.00% 00:49:10.149 03:45:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:49:10.149 03:45:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:49:10.149 03:45:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:10.149 03:45:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:49:10.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:49:10.149 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:10.149 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:10.408 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:49:10.667 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:10.667 03:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:49:10.925 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK4 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:10.925 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:49:11.184 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 
controller(s) 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:11.184 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:49:11.443 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:49:11.443 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:49:11.443 
03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:49:11.443 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:49:11.443 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:49:11.702 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:49:11.702 03:45:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:11.702 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:11.702 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:49:11.702 
03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:11.702 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:11.702 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:11.702 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:11.702 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:49:11.961 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:49:11.961 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:49:11.961 03:45:53 
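Editor's note: the trace above is the multiconnection teardown loop — for each of the 11 subsystems the initiator disconnects, the script polls lsblk until the SPDK serial disappears, and the subsystem is deleted over RPC. Condensed into plain shell (the rpc.py path is an assumption based on this workspace layout, and the real waitforserial_disconnect retries with a bounded counter rather than looping forever):

for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # waitforserial_disconnect: wait until no block device reports serial SPDK${i}
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done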
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:11.961 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:11.961 rmmod nvme_tcp 00:49:12.219 rmmod nvme_fabrics 00:49:12.219 rmmod nvme_keyring 00:49:12.219 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:12.219 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:49:12.219 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:49:12.219 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2252577 ']' 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2252577 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 2252577 ']' 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 2252577 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2252577 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2252577' 00:49:12.220 killing process with pid 2252577 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 2252577 00:49:12.220 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@973 -- # wait 2252577 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:12.478 03:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:15.069 03:45:55 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:49:15.069 00:49:15.069 real 1m10.363s 00:49:15.069 user 4m8.657s 00:49:15.069 sys 0m24.502s 00:49:15.069 
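Editor's note: nvmftestfini then unwinds the transport, as logged above — the NVMe-oF kernel modules come out, the SPDK target process is killed, and the test address is flushed. In outline (the pid and interface name belong to this particular run):

modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines
kill 2252577                   # the nvmf target (reactor_0) started for this test, then wait on it
ip -4 addr flush cvl_0_1       # drop the initiator-side test address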
03:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable 00:49:15.069 03:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:49:15.069 ************************************ 00:49:15.069 END TEST nvmf_multiconnection 00:49:15.069 ************************************ 00:49:15.069 03:45:55 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:49:15.069 03:45:55 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:49:15.069 03:45:55 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:49:15.069 03:45:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:15.069 ************************************ 00:49:15.069 START TEST nvmf_initiator_timeout 00:49:15.069 ************************************ 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:49:15.069 * Looking for test storage... 00:49:15.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:15.069 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:15.070 03:45:56 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:49:15.070 03:45:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:49:20.340 Found 0000:86:00.0 (0x8086 - 0x159b) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:49:20.340 Found 0000:86:00.1 (0x8086 - 0x159b) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 
-- # [[ tcp == rdma ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:20.340 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:49:20.341 Found net devices under 0000:86:00.0: cvl_0_0 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:49:20.341 Found net devices under 0000:86:00.1: cvl_0_1 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:49:20.341 03:46:01 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:49:20.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:20.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:49:20.341 00:49:20.341 --- 10.0.0.2 ping statistics --- 00:49:20.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.341 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:49:20.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:20.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:49:20.341 00:49:20.341 --- 10.0.0.1 ping statistics --- 00:49:20.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:20.341 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2266590 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2266590 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 2266590 ']' 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:20.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:49:20.341 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.341 [2024-06-11 03:46:01.715175] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:49:20.341 [2024-06-11 03:46:01.715216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:20.341 EAL: No free 2048 kB hugepages reported on node 1 00:49:20.600 [2024-06-11 03:46:01.779468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:49:20.600 [2024-06-11 03:46:01.820499] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
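The nvmf_tcp_init sequence above splits the runner's two E810 ports into a point-to-point TCP test topology: the target port is moved into a private network namespace while the initiator port stays in the root namespace, so initiator and target traffic crosses the physical link rather than loopback. A condensed sketch of the commands the harness just ran (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this runner):

  # target port lives in its own namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
  ping -c 1 10.0.0.2                                                 # reachability check

Every nvmf_tgt invocation that follows is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is why the target listens on 10.0.0.2 from inside the namespace.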
00:49:20.600 [2024-06-11 03:46:01.820541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:20.600 [2024-06-11 03:46:01.820548] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:20.600 [2024-06-11 03:46:01.820559] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:20.600 [2024-06-11 03:46:01.820564] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:20.600 [2024-06-11 03:46:01.820613] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:49:20.600 [2024-06-11 03:46:01.820710] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:49:20.600 [2024-06-11 03:46:01.820780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:49:20.600 [2024-06-11 03:46:01.820780] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.600 Malloc0 00:49:20.600 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:20.601 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:49:20.601 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:20.601 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.601 Delay0 00:49:20.601 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:20.601 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:20.601 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:20.601 03:46:01 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.601 [2024-06-11 03:46:01.996336] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:20.601 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:20.601 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:49:20.601 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:20.601 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:20.860 [2024-06-11 03:46:02.021138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:20.860 03:46:02 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:49:21.796 03:46:03 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:49:21.796 03:46:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:49:21.796 03:46:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:49:21.796 03:46:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:49:21.796 03:46:03 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2267293 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:49:24.328 03:46:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:49:24.328 [global] 00:49:24.328 thread=1 00:49:24.328 invalidate=1 00:49:24.328 rw=write 00:49:24.328 time_based=1 00:49:24.328 runtime=60 00:49:24.328 
ioengine=libaio 00:49:24.328 direct=1 00:49:24.328 bs=4096 00:49:24.328 iodepth=1 00:49:24.328 norandommap=0 00:49:24.328 numjobs=1 00:49:24.328 00:49:24.328 verify_dump=1 00:49:24.328 verify_backlog=512 00:49:24.328 verify_state_save=0 00:49:24.328 do_verify=1 00:49:24.328 verify=crc32c-intel 00:49:24.328 [job0] 00:49:24.328 filename=/dev/nvme0n1 00:49:24.328 Could not set queue depth (nvme0n1) 00:49:24.328 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:49:24.328 fio-3.35 00:49:24.328 Starting 1 thread 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:26.860 true 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:26.860 true 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:26.860 true 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:26.860 true 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:26.860 03:46:08 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:30.145 true 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:30.145 true 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:30.145 
03:46:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:30.145 true 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:49:30.145 true 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:49:30.145 03:46:11 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2267293 00:50:26.373 00:50:26.373 job0: (groupid=0, jobs=1): err= 0: pid=2267411: Tue Jun 11 03:47:05 2024 00:50:26.373 read: IOPS=7, BW=30.1KiB/s (30.8kB/s)(1808KiB/60041msec) 00:50:26.373 slat (nsec): min=3604, max=49522, avg=5686.56, stdev=2671.88 00:50:26.373 clat (usec): min=560, max=41488k, avg=132551.19, stdev=1949528.77 00:50:26.373 lat (usec): min=569, max=41488k, avg=132556.88, stdev=1949528.84 00:50:26.373 clat percentiles (msec): 00:50:26.373 | 1.00th=[ 41], 5.00th=[ 42], 10.00th=[ 42], 20.00th=[ 42], 00:50:26.373 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:50:26.373 | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 42], 95.00th=[ 42], 00:50:26.373 | 99.00th=[ 43], 99.50th=[ 44], 99.90th=[17113], 99.95th=[17113], 00:50:26.373 | 99.99th=[17113] 00:50:26.373 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60041msec); 0 zone resets 00:50:26.373 slat (nsec): min=4844, max=42519, avg=11027.80, stdev=2610.25 00:50:26.373 clat (usec): min=191, max=438, avg=228.48, stdev=17.85 00:50:26.373 lat (usec): min=196, max=481, avg=239.51, stdev=19.06 00:50:26.373 clat percentiles (usec): 00:50:26.373 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:50:26.373 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 231], 00:50:26.373 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 241], 95.00th=[ 247], 00:50:26.373 | 99.00th=[ 265], 99.50th=[ 326], 99.90th=[ 441], 99.95th=[ 441], 00:50:26.373 | 99.99th=[ 441] 00:50:26.374 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:50:26.374 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:50:26.374 lat (usec) : 250=51.04%, 500=2.07%, 750=0.21% 00:50:26.374 lat (msec) : 50=46.58%, >=2000=0.10% 00:50:26.374 cpu : usr=0.01%, sys=0.01%, ctx=965, majf=0, minf=2 00:50:26.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:50:26.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:26.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:50:26.374 issued rwts: total=452,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:50:26.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:50:26.374 00:50:26.374 Run status group 0 (all jobs): 00:50:26.374 READ: bw=30.1KiB/s (30.8kB/s), 30.1KiB/s-30.1KiB/s (30.8kB/s-30.8kB/s), io=1808KiB (1851kB), run=60041-60041msec 00:50:26.374 WRITE: 
bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60041-60041msec 00:50:26.374 00:50:26.374 Disk stats (read/write): 00:50:26.374 nvme0n1: ios=547/512, merge=0/0, ticks=18333/107, in_queue=18440, util=99.60% 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:50:26.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:50:26.374 nvmf hotplug test: fio successful as expected 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:50:26.374 rmmod nvme_tcp 00:50:26.374 rmmod nvme_fabrics 00:50:26.374 rmmod nvme_keyring 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2266590 ']' 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 
2266590 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 2266590 ']' 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 2266590 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2266590 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2266590' 00:50:26.374 killing process with pid 2266590 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 2266590 00:50:26.374 03:47:05 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 2266590 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:26.374 03:47:06 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:26.941 03:47:08 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:50:26.941 00:50:26.941 real 1m12.184s 00:50:26.941 user 4m22.365s 00:50:26.941 sys 0m5.658s 00:50:26.941 03:47:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:50:26.941 03:47:08 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:50:26.941 ************************************ 00:50:26.941 END TEST nvmf_initiator_timeout 00:50:26.941 ************************************ 00:50:26.941 03:47:08 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:50:26.941 03:47:08 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:50:26.941 03:47:08 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:50:26.941 03:47:08 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:50:26.941 03:47:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 
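Before the log moves on to perf_adq, a recap of what the just-finished nvmf_initiator_timeout test drove over JSON-RPC: a malloc bdev wrapped in a delay bdev and exported through an NVMe-oF TCP subsystem, with fio writing 4 KiB at queue depth 1 for 60 s under crc32c verify while the delay latencies were stretched past the host's default 30 s I/O timeout and then restored. A minimal sketch of the same flow via scripts/rpc.py (rpc_cmd in the trace is a thin wrapper over it; the names and values below are copied from the log above):

  rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us baseline latencies
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # mid-run: push delays past the host timeout (values in microseconds;
  # the trace uses 31000000 for avg/p99 read and avg write, 310000000 for p99_write)
  rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
  rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  # ...sleep, then set all four latencies back to 30 and let fio drain

The ~41.5 s maximum read completion latency in the fio output (clat max=41488k usec) is that injected delay at work; fio finishing with verify intact is the pass condition ('nvmf hotplug test: fio successful as expected').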
00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:50:33.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:50:33.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:50:33.503 03:47:14 nvmf_tcp -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:50:33.503 Found net devices under 0000:86:00.0: cvl_0_0 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:33.503 03:47:14 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:50:33.504 Found net devices under 0000:86:00.1: cvl_0_1 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:50:33.504 03:47:14 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:50:33.504 03:47:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:50:33.504 03:47:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:50:33.504 03:47:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:33.504 ************************************ 00:50:33.504 START TEST nvmf_perf_adq 00:50:33.504 ************************************ 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:50:33.504 * Looking for test storage... 
00:50:33.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:50:33.504 03:47:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:50:38.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:50:38.830 Found 0000:86:00.1 (0x8086 - 0x159b) 
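Each gather_supported_nvmf_pci_devs pass in this log follows the same pattern: match PCI vendor:device IDs against an allow-list (Intel E810 0x1592/0x159b, X722 0x37d2, assorted Mellanox IDs), then resolve each matching function to its kernel interface purely through sysfs. The name lookup, reduced to its core (the PCI address and cvl_* names are from this runner):

  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
  echo "Found net devices under $pci: ${pci_net_devs[*]}"

This is why 'Found net devices under 0000:86:00.x: cvl_0_x' keeps reappearing: each test entry point re-sources nvmf/common.sh and re-runs the scan.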
00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:50:38.830 Found net devices under 0000:86:00.0: cvl_0_0 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:38.830 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:50:38.831 Found net devices under 0000:86:00.1: cvl_0_1 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:50:38.831 03:47:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:50:39.765 03:47:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:50:41.667 03:47:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:50:46.937 03:47:28 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:50:46.937 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:50:46.938 Found 0000:86:00.0 (0x8086 - 0x159b) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:50:46.938 Found 0000:86:00.1 (0x8086 - 0x159b) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:50:46.938 Found net devices under 0000:86:00.0: cvl_0_0 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:50:46.938 Found net devices under 0000:86:00.1: cvl_0_1 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:50:46.938 03:47:28 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:50:46.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:46.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:50:46.938 00:50:46.938 --- 10.0.0.2 ping statistics --- 00:50:46.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:46.938 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:50:46.938 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:50:46.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:50:46.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:50:46.938 00:50:46.938 --- 10.0.0.1 ping statistics --- 00:50:46.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:46.938 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2285562 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2285562 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 2285562 ']' 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:47.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:50:47.197 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.197 [2024-06-11 03:47:28.419236] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:50:47.197 [2024-06-11 03:47:28.419275] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:47.197 EAL: No free 2048 kB hugepages reported on node 1 00:50:47.197 [2024-06-11 03:47:28.482465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:50:47.198 [2024-06-11 03:47:28.524401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:50:47.198 [2024-06-11 03:47:28.524438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:47.198 [2024-06-11 03:47:28.524445] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:47.198 [2024-06-11 03:47:28.524454] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:47.198 [2024-06-11 03:47:28.524459] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:47.198 [2024-06-11 03:47:28.524498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:50:47.198 [2024-06-11 03:47:28.524598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:50:47.198 [2024-06-11 03:47:28.524689] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:50:47.198 [2024-06-11 03:47:28.524690] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.198 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
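nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and polling until its RPC socket answers, after which adq_configure_nvmf_target 0 drives the baseline (ADQ-off) setup over rpc_cmd. The full call sequence, begun in the trace above and completed just below, condenses to the following sketch; scripts/rpc.py stands in for the harness's rpc_cmd wrapper, and the wait loop is an assumption about what waitforlisten does internally:

    # start the target in the namespace; --wait-for-rpc defers subsystem init
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # assumed poll loop: wait for the default RPC UNIX socket to appear
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # baseline pass: placement-id 0 (off) and socket priority 0
    impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
    scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket is filesystem-based, so rpc.py can talk to the target from the root namespace even though the process runs inside cvl_0_0_ns_spdk.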
set +x 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.456 [2024-06-11 03:47:28.724243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:50:47.456 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.457 Malloc1 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:47.457 [2024-06-11 03:47:28.771807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2285743 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:50:47.457 03:47:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:50:47.457 EAL: No free 2048 kB hugepages reported on node 1 00:50:49.994 03:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:50:49.994 03:47:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:50:49.994 03:47:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:50:49.994 03:47:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:50:49.994 03:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:50:49.994 "tick_rate": 2100000000, 
00:50:49.994 "poll_groups": [ 00:50:49.995 { 00:50:49.995 "name": "nvmf_tgt_poll_group_000", 00:50:49.995 "admin_qpairs": 1, 00:50:49.995 "io_qpairs": 1, 00:50:49.995 "current_admin_qpairs": 1, 00:50:49.995 "current_io_qpairs": 1, 00:50:49.995 "pending_bdev_io": 0, 00:50:49.995 "completed_nvme_io": 19139, 00:50:49.995 "transports": [ 00:50:49.995 { 00:50:49.995 "trtype": "TCP" 00:50:49.995 } 00:50:49.995 ] 00:50:49.995 }, 00:50:49.995 { 00:50:49.995 "name": "nvmf_tgt_poll_group_001", 00:50:49.995 "admin_qpairs": 0, 00:50:49.995 "io_qpairs": 1, 00:50:49.995 "current_admin_qpairs": 0, 00:50:49.995 "current_io_qpairs": 1, 00:50:49.995 "pending_bdev_io": 0, 00:50:49.995 "completed_nvme_io": 19545, 00:50:49.995 "transports": [ 00:50:49.995 { 00:50:49.995 "trtype": "TCP" 00:50:49.995 } 00:50:49.995 ] 00:50:49.995 }, 00:50:49.995 { 00:50:49.995 "name": "nvmf_tgt_poll_group_002", 00:50:49.995 "admin_qpairs": 0, 00:50:49.995 "io_qpairs": 1, 00:50:49.995 "current_admin_qpairs": 0, 00:50:49.995 "current_io_qpairs": 1, 00:50:49.995 "pending_bdev_io": 0, 00:50:49.995 "completed_nvme_io": 19497, 00:50:49.995 "transports": [ 00:50:49.995 { 00:50:49.995 "trtype": "TCP" 00:50:49.995 } 00:50:49.995 ] 00:50:49.995 }, 00:50:49.995 { 00:50:49.995 "name": "nvmf_tgt_poll_group_003", 00:50:49.995 "admin_qpairs": 0, 00:50:49.995 "io_qpairs": 1, 00:50:49.995 "current_admin_qpairs": 0, 00:50:49.995 "current_io_qpairs": 1, 00:50:49.995 "pending_bdev_io": 0, 00:50:49.995 "completed_nvme_io": 19133, 00:50:49.995 "transports": [ 00:50:49.995 { 00:50:49.995 "trtype": "TCP" 00:50:49.995 } 00:50:49.995 ] 00:50:49.995 } 00:50:49.995 ] 00:50:49.995 }' 00:50:49.995 03:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:50:49.995 03:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:50:49.995 03:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:50:49.995 03:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:50:49.995 03:47:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2285743 00:50:58.119 Initializing NVMe Controllers 00:50:58.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:50:58.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:50:58.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:50:58.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:50:58.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:50:58.119 Initialization complete. Launching workers. 
00:50:58.119 ======================================================== 00:50:58.119 Latency(us) 00:50:58.119 Device Information : IOPS MiB/s Average min max 00:50:58.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10098.30 39.45 6337.98 1770.70 11086.18 00:50:58.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10310.50 40.28 6207.58 1556.52 10681.10 00:50:58.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10275.50 40.14 6228.04 2239.15 11061.17 00:50:58.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10151.40 39.65 6305.90 2244.57 11108.52 00:50:58.119 ======================================================== 00:50:58.119 Total : 40835.68 159.51 6269.42 1556.52 11108.52 00:50:58.119 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:50:58.119 rmmod nvme_tcp 00:50:58.119 rmmod nvme_fabrics 00:50:58.119 rmmod nvme_keyring 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2285562 ']' 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2285562 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2285562 ']' 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2285562 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:50:58.119 03:47:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2285562 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2285562' 00:50:58.119 killing process with pid 2285562 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2285562 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2285562 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq 
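Two sanity checks tie the baseline numbers above together. First, the poll-group check: the harness asserts that, with placement-id off, the four I/O qpairs were round-robined one per poll group, which is the jq pipeline traced before the run; the failure branch below is an assumption, since the trace's [[ 4 -ne 4 ]] simply falls through:

    # count poll groups carrying exactly one I/O qpair; expect all 4
    count=$(scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    [[ $count -ne 4 ]] && exit 1

Second, the latency column is consistent with Little's law: 4 perf cores x queue depth 64 = 256 outstanding I/Os, and 256 / 40835.68 IOPS = 6.27 ms, matching the reported 6269.42 us average; likewise 40835.68 IOPS x 4 KiB = 159.5 MiB/s, matching the throughput column.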
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:58.119 03:47:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:00.024 03:47:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:51:00.024 03:47:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:51:00.024 03:47:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:51:01.401 03:47:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:51:03.307 03:47:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:51:08.582 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:51:08.583 
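The rmmod ice / modprobe ice / sleep 5 in the trace above is adq_reload_driver: after the baseline pass the namespace is torn down (remove_spdk_ns), then the E810 driver is bounced, presumably so the ports come back without the previous run's queue and channel state, and the short sleep gives the driver time to re-create the cvl_* netdevs before nvmftestinit scans the PCI bus again:

    rmmod ice        # removes the driver, tearing down cvl_0_0/cvl_0_1
    modprobe ice     # re-probes both E810 functions
    sleep 5          # allow netdev re-creation before the next device scan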
03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:51:08.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:51:08.583 Found 0000:86:00.1 (0x8086 - 0x159b) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:51:08.583 Found net devices under 0000:86:00.0: cvl_0_0 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:51:08.583 Found net devices under 0000:86:00.1: cvl_0_1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:51:08.583 
03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:51:08.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:08.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:51:08.583 00:51:08.583 --- 10.0.0.2 ping statistics --- 00:51:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:08.583 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:51:08.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:08.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:51:08.583 00:51:08.583 --- 10.0.0.1 ping statistics --- 00:51:08.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:08.583 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:51:08.583 net.core.busy_poll = 1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:51:08.583 net.core.busy_read = 1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2289359 00:51:08.583 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2289359 00:51:08.584 03:47:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:51:08.584 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # '[' -z 2289359 ']' 00:51:08.584 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:08.584 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:08.584 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:08.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:08.584 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:08.584 03:47:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:08.584 [2024-06-11 03:47:49.968675] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:51:08.584 [2024-06-11 03:47:49.968719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:08.842 EAL: No free 2048 kB hugepages reported on node 1 00:51:08.842 [2024-06-11 03:47:50.039343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:51:08.842 [2024-06-11 03:47:50.083534] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:08.842 [2024-06-11 03:47:50.083573] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:08.842 [2024-06-11 03:47:50.083584] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:08.842 [2024-06-11 03:47:50.083591] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:08.842 [2024-06-11 03:47:50.083598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
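adq_configure_driver above is the host-side half of ADQ on the ice driver: it enables hardware TC offload, turns on busy polling, defines two traffic classes on the target port (TC0 = queues 0-1 for everything else, TC1 = queues 2-3 reserved for NVMe/TCP), and installs a hardware-only flower filter that steers TCP traffic to 10.0.0.2:4420 into TC1. Collected in one place, all against the target port inside its namespace as in the trace:

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two TCs: priority 0 -> TC0 (2 queues at offset 0), priority 1 -> TC1 (2 queues at offset 2)
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # skip_sw: match in hardware only; hw_tc 1 steers NVMe/TCP flows into TC1
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper that runs last configures transmit-queue selection to follow the receive queues (XPS by rx-queue mapping), keeping each flow's TX and RX on the same queue pair.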
00:51:08.842 [2024-06-11 03:47:50.083644] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:51:08.842 [2024-06-11 03:47:50.083668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:51:08.842 [2024-06-11 03:47:50.083755] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:51:08.842 [2024-06-11 03:47:50.083759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:09.409 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:09.409 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@863 -- # return 0 00:51:09.409 03:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:09.409 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:09.409 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.409 03:47:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:09.410 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:51:09.410 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:51:09.410 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:51:09.410 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.410 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.410 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.668 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:51:09.668 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:51:09.668 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.668 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.668 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.669 [2024-06-11 03:47:50.937610] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.669 Malloc1 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.669 03:47:50 
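On the SPDK side, adq_configure_nvmf_target 1 differs from the baseline call in exactly two flags, and that is the whole ADQ switch: placement-id 1 makes the posix sock layer group incoming connections by their NAPI ID, and socket priority 1 marks the target's traffic so it lands in the TC1 channel configured above. The rest of the subsystem plumbing, continued just below, is identical to the first pass:

    scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1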
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:09.669 [2024-06-11 03:47:50.989197] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2289608 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:51:09.669 03:47:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:51:09.669 EAL: No free 2048 kB hugepages reported on node 1 00:51:11.645 03:47:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:51:11.645 03:47:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:11.645 03:47:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:11.645 03:47:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:11.645 03:47:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:51:11.645 "tick_rate": 2100000000, 00:51:11.645 "poll_groups": [ 00:51:11.645 { 00:51:11.645 "name": "nvmf_tgt_poll_group_000", 00:51:11.645 "admin_qpairs": 1, 00:51:11.645 "io_qpairs": 2, 00:51:11.645 "current_admin_qpairs": 1, 00:51:11.645 "current_io_qpairs": 2, 00:51:11.645 "pending_bdev_io": 0, 00:51:11.645 "completed_nvme_io": 28298, 00:51:11.645 "transports": [ 00:51:11.645 { 00:51:11.645 "trtype": "TCP" 00:51:11.645 } 00:51:11.645 ] 00:51:11.645 }, 00:51:11.645 { 00:51:11.645 "name": "nvmf_tgt_poll_group_001", 00:51:11.645 "admin_qpairs": 0, 00:51:11.645 "io_qpairs": 2, 00:51:11.645 "current_admin_qpairs": 0, 00:51:11.645 "current_io_qpairs": 2, 00:51:11.645 "pending_bdev_io": 0, 00:51:11.645 "completed_nvme_io": 29651, 00:51:11.645 "transports": [ 00:51:11.645 { 00:51:11.645 "trtype": "TCP" 00:51:11.645 } 00:51:11.645 ] 00:51:11.645 }, 00:51:11.645 { 00:51:11.645 "name": "nvmf_tgt_poll_group_002", 00:51:11.645 "admin_qpairs": 0, 00:51:11.645 "io_qpairs": 0, 00:51:11.645 "current_admin_qpairs": 0, 00:51:11.645 "current_io_qpairs": 0, 00:51:11.645 "pending_bdev_io": 0, 00:51:11.645 "completed_nvme_io": 0, 
00:51:11.645 "transports": [ 00:51:11.645 { 00:51:11.645 "trtype": "TCP" 00:51:11.645 } 00:51:11.645 ] 00:51:11.645 }, 00:51:11.645 { 00:51:11.645 "name": "nvmf_tgt_poll_group_003", 00:51:11.645 "admin_qpairs": 0, 00:51:11.645 "io_qpairs": 0, 00:51:11.645 "current_admin_qpairs": 0, 00:51:11.645 "current_io_qpairs": 0, 00:51:11.645 "pending_bdev_io": 0, 00:51:11.645 "completed_nvme_io": 0, 00:51:11.645 "transports": [ 00:51:11.645 { 00:51:11.645 "trtype": "TCP" 00:51:11.645 } 00:51:11.645 ] 00:51:11.645 } 00:51:11.645 ] 00:51:11.645 }' 00:51:11.645 03:47:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:51:11.645 03:47:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:51:11.904 03:47:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:51:11.904 03:47:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:51:11.904 03:47:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2289608 00:51:20.034 Initializing NVMe Controllers 00:51:20.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:51:20.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:51:20.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:51:20.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:51:20.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:51:20.035 Initialization complete. Launching workers. 00:51:20.035 ======================================================== 00:51:20.035 Latency(us) 00:51:20.035 Device Information : IOPS MiB/s Average min max 00:51:20.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7161.70 27.98 8971.79 1443.79 52265.39 00:51:20.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6968.80 27.22 9186.13 1205.89 54049.23 00:51:20.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8235.60 32.17 7770.39 1372.42 53089.85 00:51:20.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8968.60 35.03 7159.15 1356.45 51497.21 00:51:20.035 ======================================================== 00:51:20.035 Total : 31334.70 122.40 8184.89 1205.89 54049.23 00:51:20.035 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:51:20.035 rmmod nvme_tcp 00:51:20.035 rmmod nvme_fabrics 00:51:20.035 rmmod nvme_keyring 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2289359 ']' 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2289359 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@949 -- # '[' -z 2289359 ']' 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # kill -0 2289359 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # uname 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2289359 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2289359' 00:51:20.035 killing process with pid 2289359 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@968 -- # kill 2289359 00:51:20.035 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@973 -- # wait 2289359 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:20.293 03:48:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:22.198 03:48:03 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:51:22.198 03:48:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:51:22.198 00:51:22.198 real 0m49.442s 00:51:22.198 user 2m46.831s 00:51:22.198 sys 0m9.595s 00:51:22.198 03:48:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:22.198 03:48:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:51:22.198 ************************************ 00:51:22.198 END TEST nvmf_perf_adq 00:51:22.198 ************************************ 00:51:22.458 03:48:03 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:51:22.458 03:48:03 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:51:22.458 03:48:03 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:22.458 03:48:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:51:22.458 ************************************ 00:51:22.458 START TEST nvmf_shutdown 00:51:22.458 ************************************ 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:51:22.458 * Looking for test storage... 
00:51:22.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:22.458 ************************************ 00:51:22.458 START TEST nvmf_shutdown_tc1 00:51:22.458 ************************************ 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:51:22.458 03:48:03 
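The two constants read in just before the sub-tests, MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, are the geometry the shutdown cases use for their backing devices: each malloc bdev is created as 64 MB of 512-byte blocks, as in this sketch (the bdev name follows the convention seen earlier in this log):

    scripts/rpc.py bdev_malloc_create "$MALLOC_BDEV_SIZE" "$MALLOC_BLOCK_SIZE" -b Malloc1   # 64 MB, 512 B blocks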
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:22.458 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:51:22.459 03:48:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:51:29.028 Found 0000:86:00.0 (0x8086 - 0x159b) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:51:29.028 Found 0000:86:00.1 (0x8086 - 0x159b) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:29.028 03:48:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:51:29.028 Found net devices under 0000:86:00.0: cvl_0_0 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:51:29.028 Found net devices under 0000:86:00.1: cvl_0_1 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:51:29.028 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:51:29.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:29.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:51:29.028 00:51:29.028 --- 10.0.0.2 ping statistics --- 00:51:29.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:29.028 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:51:29.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:51:29.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:51:29.029 00:51:29.029 --- 10.0.0.1 ping statistics --- 00:51:29.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:29.029 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2295619 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2295619 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2295619 ']' 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:29.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:29.029 03:48:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.029 [2024-06-11 03:48:09.839418] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
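(Annotation: the netns plumbing traced above by nvmftestinit/nvmf_tcp_init reduces to the following minimal sketch. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the values from this particular run; on another host the device names would differ.)

    # Sketch of the two-port loopback topology built by nvmf_tcp_init, as traced.
    NS=cvl_0_0_ns_spdk                       # namespace name used in this run
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # first NIC port becomes the target side
    ip addr add 10.0.0.1/24 dev cvl_0_1      # second port stays with the initiator
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
    ping -c 1 10.0.0.2                       # initiator -> target sanity check
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator sanity check
    # nvmfappstart then launches the target inside the namespace, as traced:
    #   ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E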
00:51:29.029 [2024-06-11 03:48:09.839461] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:29.029 EAL: No free 2048 kB hugepages reported on node 1 00:51:29.029 [2024-06-11 03:48:09.904191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:51:29.029 [2024-06-11 03:48:09.945123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:29.029 [2024-06-11 03:48:09.945163] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:29.029 [2024-06-11 03:48:09.945169] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:29.029 [2024-06-11 03:48:09.945175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:29.029 [2024-06-11 03:48:09.945180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:29.029 [2024-06-11 03:48:09.945244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:51:29.029 [2024-06-11 03:48:09.945351] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:51:29.029 [2024-06-11 03:48:09.945458] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:51:29.029 [2024-06-11 03:48:09.945460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:29.288 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.288 [2024-06-11 03:48:10.687892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:29.548 03:48:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.548 Malloc1 00:51:29.548 [2024-06-11 03:48:10.783712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:29.548 Malloc2 00:51:29.548 Malloc3 00:51:29.548 Malloc4 00:51:29.548 Malloc5 00:51:29.807 Malloc6 00:51:29.807 Malloc7 00:51:29.807 Malloc8 00:51:29.807 Malloc9 00:51:29.807 Malloc10 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2295906 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2295906 /var/tmp/bdevperf.sock 00:51:29.807 03:48:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 2295906 ']' 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:29.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:51:29.807 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": 
"$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 
00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 [2024-06-11 03:48:11.252623] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:51:30.068 [2024-06-11 03:48:11.252671] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:30.068 { 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme$subsystem", 00:51:30.068 "trtype": "$TEST_TRANSPORT", 00:51:30.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "$NVMF_PORT", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:30.068 "hdgst": ${hdgst:-false}, 00:51:30.068 "ddgst": ${ddgst:-false} 00:51:30.068 }, 00:51:30.068 "method": "bdev_nvme_attach_controller" 00:51:30.068 } 00:51:30.068 EOF 00:51:30.068 )") 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:51:30.068 EAL: No free 2048 kB hugepages reported on node 1 00:51:30.068 03:48:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:51:30.068 "params": { 00:51:30.068 "name": "Nvme1", 00:51:30.068 "trtype": "tcp", 00:51:30.068 "traddr": "10.0.0.2", 00:51:30.068 "adrfam": "ipv4", 00:51:30.068 "trsvcid": "4420", 00:51:30.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:30.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:51:30.068 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme2", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme3", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme4", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme5", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme6", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme7", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme8", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 
"trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme9", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 },{ 00:51:30.069 "params": { 00:51:30.069 "name": "Nvme10", 00:51:30.069 "trtype": "tcp", 00:51:30.069 "traddr": "10.0.0.2", 00:51:30.069 "adrfam": "ipv4", 00:51:30.069 "trsvcid": "4420", 00:51:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:51:30.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:51:30.069 "hdgst": false, 00:51:30.069 "ddgst": false 00:51:30.069 }, 00:51:30.069 "method": "bdev_nvme_attach_controller" 00:51:30.069 }' 00:51:30.069 [2024-06-11 03:48:11.319332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:30.069 [2024-06-11 03:48:11.359229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2295906 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:51:31.445 03:48:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:51:32.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2295906 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2295619 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.822 { 
00:51:32.822 "params": { 00:51:32.822 "name": "Nvme$subsystem", 00:51:32.822 "trtype": "$TEST_TRANSPORT", 00:51:32.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.822 "adrfam": "ipv4", 00:51:32.822 "trsvcid": "$NVMF_PORT", 00:51:32.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.822 "hdgst": ${hdgst:-false}, 00:51:32.822 "ddgst": ${ddgst:-false} 00:51:32.822 }, 00:51:32.822 "method": "bdev_nvme_attach_controller" 00:51:32.822 } 00:51:32.822 EOF 00:51:32.822 )") 00:51:32.822 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 
00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 [2024-06-11 03:48:13.842374] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 [2024-06-11 03:48:13.842424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2296408 ] 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 
00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:32.823 { 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme$subsystem", 00:51:32.823 "trtype": "$TEST_TRANSPORT", 00:51:32.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "$NVMF_PORT", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:32.823 "hdgst": ${hdgst:-false}, 00:51:32.823 "ddgst": ${ddgst:-false} 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 } 00:51:32.823 EOF 00:51:32.823 )") 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
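(Annotation: the generated config never touches disk. The "--json /dev/fd/62" and "/dev/fd/63" arguments traced above come from bash process substitution, as the earlier "Killed" line for the placeholder bdev_svc shows verbatim; tc1 kills that placeholder with kill -9, confirms the target still answers kill -0, and only then runs the real bdevperf verify pass. A sketch of the bdevperf invocation, with the flags taken from this run:)

    # Feed the config anonymously over a file descriptor, as in the traced command.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # this run's checkout
    # -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify, -t 1: one-second run
    "$rootdir/build/examples/bdevperf" -q 64 -o 65536 -w verify -t 1 \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)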
00:51:32.823 EAL: No free 2048 kB hugepages reported on node 1 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:51:32.823 03:48:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme1", 00:51:32.823 "trtype": "tcp", 00:51:32.823 "traddr": "10.0.0.2", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "4420", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:51:32.823 "hdgst": false, 00:51:32.823 "ddgst": false 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 },{ 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme2", 00:51:32.823 "trtype": "tcp", 00:51:32.823 "traddr": "10.0.0.2", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "4420", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:51:32.823 "hdgst": false, 00:51:32.823 "ddgst": false 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.823 },{ 00:51:32.823 "params": { 00:51:32.823 "name": "Nvme3", 00:51:32.823 "trtype": "tcp", 00:51:32.823 "traddr": "10.0.0.2", 00:51:32.823 "adrfam": "ipv4", 00:51:32.823 "trsvcid": "4420", 00:51:32.823 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:51:32.823 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:51:32.823 "hdgst": false, 00:51:32.823 "ddgst": false 00:51:32.823 }, 00:51:32.823 "method": "bdev_nvme_attach_controller" 00:51:32.824 },{ 00:51:32.824 "params": { 00:51:32.824 "name": "Nvme4", 00:51:32.824 "trtype": "tcp", 00:51:32.824 "traddr": "10.0.0.2", 00:51:32.824 "adrfam": "ipv4", 00:51:32.824 "trsvcid": "4420", 00:51:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:51:32.824 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:51:32.824 "hdgst": false, 00:51:32.824 "ddgst": false 00:51:32.824 }, 00:51:32.824 "method": "bdev_nvme_attach_controller" 00:51:32.824 },{ 00:51:32.824 "params": { 00:51:32.824 "name": "Nvme5", 00:51:32.824 "trtype": "tcp", 00:51:32.824 "traddr": "10.0.0.2", 00:51:32.824 "adrfam": "ipv4", 00:51:32.824 "trsvcid": "4420", 00:51:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:51:32.824 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:51:32.824 "hdgst": false, 00:51:32.824 "ddgst": false 00:51:32.824 }, 00:51:32.824 "method": "bdev_nvme_attach_controller" 00:51:32.824 },{ 00:51:32.824 "params": { 00:51:32.824 "name": "Nvme6", 00:51:32.824 "trtype": "tcp", 00:51:32.824 "traddr": "10.0.0.2", 00:51:32.824 "adrfam": "ipv4", 00:51:32.824 "trsvcid": "4420", 00:51:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:51:32.824 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:51:32.824 "hdgst": false, 00:51:32.824 "ddgst": false 00:51:32.824 }, 00:51:32.824 "method": "bdev_nvme_attach_controller" 00:51:32.824 },{ 00:51:32.824 "params": { 00:51:32.824 "name": "Nvme7", 00:51:32.824 "trtype": "tcp", 00:51:32.824 "traddr": "10.0.0.2", 00:51:32.824 "adrfam": "ipv4", 00:51:32.824 "trsvcid": "4420", 00:51:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:51:32.824 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:51:32.824 "hdgst": false, 00:51:32.824 "ddgst": false 00:51:32.824 }, 00:51:32.824 "method": "bdev_nvme_attach_controller" 00:51:32.824 },{ 00:51:32.824 "params": { 00:51:32.824 "name": "Nvme8", 00:51:32.824 "trtype": "tcp", 00:51:32.824 "traddr": "10.0.0.2", 00:51:32.824 "adrfam": "ipv4", 00:51:32.824 "trsvcid": "4420", 00:51:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:51:32.824 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:51:32.824 "hdgst": false, 00:51:32.824 "ddgst": false 00:51:32.824 }, 00:51:32.824 "method": "bdev_nvme_attach_controller" 00:51:32.824 },{ 00:51:32.824 "params": { 00:51:32.824 "name": "Nvme9", 00:51:32.824 "trtype": "tcp", 00:51:32.824 "traddr": "10.0.0.2", 00:51:32.824 "adrfam": "ipv4", 00:51:32.824 "trsvcid": "4420", 00:51:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:51:32.824 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:51:32.824 "hdgst": false, 00:51:32.824 "ddgst": false 00:51:32.824 }, 00:51:32.824 "method": "bdev_nvme_attach_controller" 00:51:32.824 },{ 00:51:32.824 "params": { 00:51:32.824 "name": "Nvme10", 00:51:32.824 "trtype": "tcp", 00:51:32.824 "traddr": "10.0.0.2", 00:51:32.824 "adrfam": "ipv4", 00:51:32.824 "trsvcid": "4420", 00:51:32.824 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:51:32.824 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:51:32.824 "hdgst": false, 00:51:32.824 "ddgst": false 00:51:32.824 }, 00:51:32.824 "method": "bdev_nvme_attach_controller" 00:51:32.824 }' 00:51:32.824 [2024-06-11 03:48:13.904607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:32.824 [2024-06-11 03:48:13.944652] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:33.760 Running I/O for 1 seconds... 00:51:35.141 00:51:35.141 Latency(us) 00:51:35.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:35.141 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme1n1 : 1.11 288.00 18.00 0.00 0.00 220242.80 15541.39 212711.13 00:51:35.141 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme2n1 : 1.10 291.03 18.19 0.00 0.00 214883.38 18100.42 212711.13 00:51:35.141 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme3n1 : 1.12 286.99 17.94 0.00 0.00 214819.79 16602.45 211712.49 00:51:35.141 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme4n1 : 1.08 297.41 18.59 0.00 0.00 203795.89 14854.83 209715.20 00:51:35.141 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme5n1 : 1.07 239.53 14.97 0.00 0.00 249364.48 17476.27 225693.50 00:51:35.141 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme6n1 : 1.12 291.98 18.25 0.00 0.00 202046.38 1443.35 210713.84 00:51:35.141 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme7n1 : 1.10 289.83 18.11 0.00 0.00 200361.50 15603.81 206719.27 00:51:35.141 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme8n1 : 1.12 285.94 17.87 0.00 0.00 200358.67 14605.17 215707.06 00:51:35.141 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme9n1 : 1.13 284.32 17.77 0.00 0.00 198554.58 17725.93 225693.50 00:51:35.141 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:51:35.141 
Verification LBA range: start 0x0 length 0x400 00:51:35.141 Nvme10n1 : 1.16 330.21 20.64 0.00 0.00 169134.73 7084.13 234681.30 00:51:35.141 =================================================================================================================== 00:51:35.141 Total : 2885.26 180.33 0.00 0.00 205742.38 1443.35 234681.30 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:51:35.141 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:51:35.141 rmmod nvme_tcp 00:51:35.141 rmmod nvme_fabrics 00:51:35.401 rmmod nvme_keyring 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2295619 ']' 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2295619 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 2295619 ']' 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 2295619 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2295619 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2295619' 00:51:35.401 killing process with pid 2295619 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 2295619 00:51:35.401 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 2295619 00:51:35.660 
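The tc1 teardown traced above follows a fixed sequence. Condensed into a bash sketch (function bodies are reconstructions from the xtrace lines, not the verbatim suite scripts; $testdir stands for the test/nvmf/target directory):

stoptarget() {
    rm -f ./local-job0-0-verify.state
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
    nvmftestfini
}

nvmftestfini() {
    sync
    modprobe -v -r nvme-tcp        # also drags out nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    [ -n "$nvmfpid" ] && killprocess "$nvmfpid"
}

killprocess() {
    kill -0 "$1"                   # confirm the target pid is still alive
    echo "killing process with pid $1"
    kill "$1"
    wait "$1"                      # reap it so the exit status is collected
}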
03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:51:35.660 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:51:35.660 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:51:35.660 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:35.660 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:51:35.660 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:35.660 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:35.660 03:48:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:51:38.198 00:51:38.198 real 0m15.250s 00:51:38.198 user 0m33.526s 00:51:38.198 sys 0m5.778s 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:51:38.198 ************************************ 00:51:38.198 END TEST nvmf_shutdown_tc1 00:51:38.198 ************************************ 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:38.198 ************************************ 00:51:38.198 START TEST nvmf_shutdown_tc2 00:51:38.198 ************************************ 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
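run_test, which wraps each nvmf_shutdown_tc* case here, appears to print START/END banners around a timed invocation of the named function; a hypothetical reconstruction based only on the banners and the real/user/sys lines in this log (the real helper lives in common/autotest_common.sh):

run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # produces the real/user/sys timing printed above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}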
00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:51:38.198 03:48:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:51:38.198 Found 0000:86:00.0 (0x8086 - 0x159b) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:51:38.198 Found 0000:86:00.1 (0x8086 - 0x159b) 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:38.198 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:51:38.199 Found net devices under 0000:86:00.0: cvl_0_0 00:51:38.199 03:48:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:51:38.199 Found net devices under 0000:86:00.1: cvl_0_1 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:51:38.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:51:38.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms
00:51:38.199
00:51:38.199 --- 10.0.0.2 ping statistics ---
00:51:38.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:51:38.199 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:51:38.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:51:38.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:51:38.199
00:51:38.199 --- 10.0.0.1 ping statistics ---
00:51:38.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:51:38.199 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2297424
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2297424
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 --
common/autotest_common.sh@830 -- # '[' -z 2297424 ']' 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:38.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:38.199 03:48:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:38.199 [2024-06-11 03:48:19.505409] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:51:38.199 [2024-06-11 03:48:19.505452] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:38.199 EAL: No free 2048 kB hugepages reported on node 1 00:51:38.199 [2024-06-11 03:48:19.567913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:51:38.459 [2024-06-11 03:48:19.608321] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:38.459 [2024-06-11 03:48:19.608362] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:38.459 [2024-06-11 03:48:19.608369] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:38.459 [2024-06-11 03:48:19.608375] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:38.459 [2024-06-11 03:48:19.608380] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
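Collected from the trace above, the network setup for these TCP tests is: the first E810 port (cvl_0_0) is moved into a namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. The commands, gathered verbatim from the xtrace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                              # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator check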
00:51:38.459 [2024-06-11 03:48:19.608424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:51:38.459 [2024-06-11 03:48:19.608510] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:51:38.459 [2024-06-11 03:48:19.608598] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:51:38.459 [2024-06-11 03:48:19.608599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:39.026 [2024-06-11 03:48:20.343034] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:39.026 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:39.026 Malloc1 00:51:39.284 [2024-06-11 03:48:20.438590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:39.284 Malloc2 00:51:39.284 Malloc3 00:51:39.284 Malloc4 00:51:39.284 Malloc5 00:51:39.284 Malloc6 00:51:39.284 Malloc7 00:51:39.543 Malloc8 00:51:39.543 Malloc9 00:51:39.543 Malloc10 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2297701 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2297701 /var/tmp/bdevperf.sock 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2297701 ']' 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:51:39.543 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:39.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
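The loop at shutdown.sh@27/@28 above appends one block per subsystem to rpcs.txt, which a single rpc_cmd invocation then executes (the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice are its output). The trace does not echo the heredoc bodies, so the RPC lines in this sketch are an educated guess at their shape:

num_subsystems=({1..10})
rm -rf "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
    cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done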
00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 
00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 [2024-06-11 03:48:20.908866] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:51:39.544 [2024-06-11 03:48:20.908917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297701 ] 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:39.544 { 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme$subsystem", 00:51:39.544 "trtype": "$TEST_TRANSPORT", 00:51:39.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "$NVMF_PORT", 00:51:39.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:39.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:39.544 "hdgst": ${hdgst:-false}, 00:51:39.544 "ddgst": ${ddgst:-false} 00:51:39.544 }, 00:51:39.544 "method": "bdev_nvme_attach_controller" 00:51:39.544 } 00:51:39.544 EOF 00:51:39.544 )") 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
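gen_nvmf_target_json, whose xtrace appears above, builds one attach-controller stanza per requested subsystem and joins them with commas; a sketch reconstructed from the trace (the heredoc body is verbatim from the log, while the outer "subsystems"/"bdev" wrapper is not visible in this excerpt and is assumed):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,    # joins the stanzas with commas in the expansion below
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

bdevperf then consumes the result on /dev/fd/63 via --json, as in the @102 command above.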
00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:51:39.544 03:48:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:51:39.544 "params": { 00:51:39.544 "name": "Nvme1", 00:51:39.544 "trtype": "tcp", 00:51:39.544 "traddr": "10.0.0.2", 00:51:39.544 "adrfam": "ipv4", 00:51:39.544 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme2", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme3", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme4", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme5", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme6", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme7", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme8", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:51:39.545 "hdgst": false, 
00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme9", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 },{ 00:51:39.545 "params": { 00:51:39.545 "name": "Nvme10", 00:51:39.545 "trtype": "tcp", 00:51:39.545 "traddr": "10.0.0.2", 00:51:39.545 "adrfam": "ipv4", 00:51:39.545 "trsvcid": "4420", 00:51:39.545 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:51:39.545 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:51:39.545 "hdgst": false, 00:51:39.545 "ddgst": false 00:51:39.545 }, 00:51:39.545 "method": "bdev_nvme_attach_controller" 00:51:39.545 }' 00:51:39.545 EAL: No free 2048 kB hugepages reported on node 1 00:51:39.803 [2024-06-11 03:48:20.971257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:39.803 [2024-06-11 03:48:21.010905] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:41.177 Running I/O for 10 seconds... 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:51:41.436 03:48:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:51:41.436 03:48:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:51:41.695 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:51:41.695 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:51:41.695 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:51:41.695 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:51:41.695 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:51:41.695 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2297701
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2297701 ']'
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2297701
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2297701
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2297701'
00:51:41.996 killing process with pid 2297701
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2297701
00:51:41.996 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2297701
00:51:41.996 Received shutdown signal, test time was about 0.722506 seconds
00:51:41.996
00:51:41.996 Latency(us)
00:51:41.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:51:41.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme1n1 : 0.69 278.75 17.42 0.00 0.00 226607.62 25715.08 198730.12
00:51:41.996 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme2n1 : 0.71 270.35 16.90 0.00 0.00 228492.43 18849.40 210713.84
00:51:41.996 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme3n1 : 0.72 355.66 22.23 0.00 0.00 169885.26 13544.11 208716.56
00:51:41.996 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme4n1 : 0.69 277.56 17.35 0.00 0.00 212195.23 13356.86 209715.20
00:51:41.996 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme5n1 : 0.72 265.98 16.62 0.00 0.00 217182.27 18974.23 240673.16
00:51:41.996 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme6n1 : 0.71 272.17 17.01 0.00 0.00 206633.77 19598.38 216705.71
00:51:41.996 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme7n1 : 0.70 284.31 17.77 0.00 0.00 189655.64 3339.22 199728.76
00:51:41.996 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme8n1 : 0.70 273.30 17.08 0.00 0.00 195151.64 15978.30 195734.19
00:51:41.996 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme9n1 : 0.71 269.32 16.83 0.00 0.00 193712.52 17476.27 212711.13
00:51:41.996 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:51:41.996 Verification LBA range: start 0x0 length 0x400
00:51:41.996 Nvme10n1 : 0.72 268.02 16.75 0.00 0.00 189710.47 18724.57 222697.57
00:51:41.996 ===================================================================================================================
00:51:41.996 Total : 2815.42 175.96 0.00 0.00 201820.17 3339.22 240673.16
00:51:42.299 03:48:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2297424
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20}
00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe
-v -r nvme-tcp 00:51:43.233 rmmod nvme_tcp 00:51:43.233 rmmod nvme_fabrics 00:51:43.233 rmmod nvme_keyring 00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:51:43.233 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2297424 ']' 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2297424 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 2297424 ']' 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 2297424 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2297424 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2297424' 00:51:43.234 killing process with pid 2297424 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 2297424 00:51:43.234 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 2297424 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:43.491 03:48:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:46.025 03:48:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:51:46.025 00:51:46.025 real 0m7.829s 00:51:46.025 user 0m23.533s 00:51:46.025 sys 0m1.292s 00:51:46.025 03:48:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:46.025 03:48:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:51:46.025 ************************************ 00:51:46.025 END TEST nvmf_shutdown_tc2 00:51:46.025 ************************************ 00:51:46.025 03:48:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 
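Before tc3 begins, one helper from the tc2 run above is worth spelling out: waitforio gates the shutdown on actual I/O progress. Reconstructed from the xtrace at shutdown.sh@50-69, where read_io_count went from 67 to 131:

waitforio() {
    # Poll bdevperf over its RPC socket until the named bdev has completed
    # at least 100 read I/Os, retrying up to 10 times, 0.25 s apart.
    local sock=$1 bdev=$2 ret=1 i read_io_count
    [ -z "$sock" ] && return 1       # an RPC socket must be given
    [ -z "$bdev" ] && return 1       # as must a bdev name
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                    # enough reads completed; stop polling
            break
        fi
        sleep 0.25
    done
    return "$ret"
}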
00:51:46.025 03:48:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:51:46.025 03:48:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:46.025 03:48:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:46.025 ************************************ 00:51:46.025 START TEST nvmf_shutdown_tc3 00:51:46.025 ************************************ 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:51:46.025 
03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:51:46.025 Found 0000:86:00.0 (0x8086 - 0x159b) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:51:46.025 Found 0000:86:00.1 (0x8086 - 0x159b) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:51:46.025 Found net devices under 0000:86:00.0: cvl_0_0 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:46.025 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:51:46.026 Found net devices under 0000:86:00.1: cvl_0_1 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
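Stripped of the xtrace noise, the NIC discovery just traced reduces to classifying each PCI function by vendor/device ID and then globbing sysfs for the netdev bound to it. A condensed sketch using only values visible in this log (the pci_bus_cache lookup that supplied the two addresses is a harness detail and is omitted):

  intel=0x8086
  e810=(0x1592 0x159b)                     # the E810 device IDs matched above
  for pci in 0000:86:00.0 0000:86:00.1; do
      # a bound network function exposes its interface under .../net/<name>
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [ -e "${pci_net_devs[0]}" ] || continue
      echo "Found net devices under $pci: ${pci_net_devs[0]##*/}"
  done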
00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:51:46.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:46.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:51:46.026 00:51:46.026 --- 10.0.0.2 ping statistics --- 00:51:46.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:46.026 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:51:46.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:51:46.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:51:46.026 00:51:46.026 --- 10.0.0.1 ping statistics --- 00:51:46.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:46.026 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2298757 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2298757 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2298757 ']' 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:46.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.026 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:51:46.026 [2024-06-11 03:48:27.350954] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
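The nvmf_tcp_init sequence above splits the two-port E810 card between initiator and target on one host: port cvl_0_0 moves into a private network namespace for the SPDK target, port cvl_0_1 stays in the root namespace for the initiator, and a firewall rule admits the NVMe/TCP port. Condensed from the commands as logged:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root namespace -> target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator port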
00:51:46.026 [2024-06-11 03:48:27.350999] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:46.026 EAL: No free 2048 kB hugepages reported on node 1 00:51:46.026 [2024-06-11 03:48:27.414849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:51:46.285 [2024-06-11 03:48:27.456659] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:46.285 [2024-06-11 03:48:27.456696] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:46.285 [2024-06-11 03:48:27.456703] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:46.285 [2024-06-11 03:48:27.456709] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:46.285 [2024-06-11 03:48:27.456714] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:46.285 [2024-06-11 03:48:27.456818] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:51:46.285 [2024-06-11 03:48:27.456907] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:51:46.285 [2024-06-11 03:48:27.457019] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:51:46.285 [2024-06-11 03:48:27.457032] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.285 [2024-06-11 03:48:27.596028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:51:46.285 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:46.286 03:48:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.286 Malloc1 00:51:46.544 [2024-06-11 03:48:27.691774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:46.544 Malloc2 00:51:46.544 Malloc3 00:51:46.544 Malloc4 00:51:46.544 Malloc5 00:51:46.544 Malloc6 00:51:46.544 Malloc7 00:51:46.803 Malloc8 00:51:46.803 Malloc9 00:51:46.804 Malloc10 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2299027 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2299027 /var/tmp/bdevperf.sock 00:51:46.804 
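Each pass through the loop above appends one subsystem's worth of RPC lines to rpcs.txt, and shutdown.sh@35 then replays the whole batch through a single rpc_cmd call, which is what produces the Malloc1..Malloc10 bdevs and the cnode1..cnode10 subsystems reported next. The individual RPC lines are not echoed in this log; the following is a plausible reconstruction using standard SPDK RPC names (the malloc size and block size are illustrative, and the script itself emits the lines with cat and a here-document):

  for i in {1..10}; do
      {
          echo "bdev_malloc_create -b Malloc$i 128 512"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done
  scripts/rpc.py --server < rpcs.txt     # one rpc.py instance executes the batch,
                                         # which is roughly what rpc_cmd does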
03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 2299027 ']' 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:46.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": 
"Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 
"trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 [2024-06-11 03:48:28.159989] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:51:46.804 [2024-06-11 03:48:28.160044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2299027 ] 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.804 "ddgst": ${ddgst:-false} 00:51:46.804 }, 00:51:46.804 "method": "bdev_nvme_attach_controller" 00:51:46.804 } 00:51:46.804 EOF 00:51:46.804 )") 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.804 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.804 { 00:51:46.804 "params": { 00:51:46.804 "name": "Nvme$subsystem", 00:51:46.804 "trtype": "$TEST_TRANSPORT", 00:51:46.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.804 "adrfam": "ipv4", 00:51:46.804 "trsvcid": "$NVMF_PORT", 00:51:46.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.804 "hdgst": ${hdgst:-false}, 00:51:46.805 "ddgst": ${ddgst:-false} 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 } 00:51:46.805 EOF 00:51:46.805 )") 00:51:46.805 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.805 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:51:46.805 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:51:46.805 { 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme$subsystem", 00:51:46.805 "trtype": "$TEST_TRANSPORT", 00:51:46.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "$NVMF_PORT", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:51:46.805 "hdgst": ${hdgst:-false}, 00:51:46.805 "ddgst": ${ddgst:-false} 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 } 00:51:46.805 EOF 00:51:46.805 )") 00:51:46.805 03:48:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:51:46.805 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:51:46.805 EAL: No free 2048 kB hugepages reported on node 1 00:51:46.805 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:51:46.805 03:48:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme1", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme2", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme3", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme4", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme5", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme6", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme7", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme8", 00:51:46.805 "trtype": "tcp", 00:51:46.805 
"traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme9", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 },{ 00:51:46.805 "params": { 00:51:46.805 "name": "Nvme10", 00:51:46.805 "trtype": "tcp", 00:51:46.805 "traddr": "10.0.0.2", 00:51:46.805 "adrfam": "ipv4", 00:51:46.805 "trsvcid": "4420", 00:51:46.805 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:51:46.805 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:51:46.805 "hdgst": false, 00:51:46.805 "ddgst": false 00:51:46.805 }, 00:51:46.805 "method": "bdev_nvme_attach_controller" 00:51:46.805 }' 00:51:47.063 [2024-06-11 03:48:28.222421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:47.063 [2024-06-11 03:48:28.261870] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:51:48.964 Running I/O for 10 seconds... 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 
00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:51:48.964 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2298757 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 2298757 ']' 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 2298757 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2298757 00:51:49.238 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:51:49.239 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:51:49.239 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2298757' 00:51:49.239 killing process with pid 2298757 00:51:49.239 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 2298757 00:51:49.239 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 2298757 00:51:49.239 [2024-06-11 03:48:30.507765] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae4d0 is same with the state(5) to be set 00:51:49.239 [2024-06-11 03:48:30.507816] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9ae4d0 is same with the state(5) to be set 00:51:49.239 [2024-06-11 03:48:30.509299] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0ed0 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.510780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae970 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511806] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set
00:51:49.240 [2024-06-11 03:48:30.511852] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511859] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511868] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511875] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511882] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511889] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511895] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511902] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511915] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511927] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511941] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511947] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511953] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511965] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511971] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511977] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511982] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511990] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is 
same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.511996] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512002] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512013] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512019] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512026] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512031] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512047] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512055] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512061] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512068] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512074] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.240 [2024-06-11 03:48:30.512079] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512091] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512096] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512103] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512109] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512115] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512120] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512126] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512132] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512137] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512143] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512148] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512155] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512160] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512172] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512177] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512183] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512189] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512195] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512202] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512208] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512216] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.512221] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aee10 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513309] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513335] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513343] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513350] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513358] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513364] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513371] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513376] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513382] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513388] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513394] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513401] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513408] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513415] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513421] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513427] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513433] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513439] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513445] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513450] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513457] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513463] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513469] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513475] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513481] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513490] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513496] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513504] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 
00:51:49.241 [2024-06-11 03:48:30.513510] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513516] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513523] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513530] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513536] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513542] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513548] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513557] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513564] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513570] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513576] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513582] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513588] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513594] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513600] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513606] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513612] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513619] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513625] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513630] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513636] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513642] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is 
same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513647] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513653] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513659] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513667] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513673] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513679] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513685] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513690] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513696] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513701] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.241 [2024-06-11 03:48:30.513709] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.513715] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.513721] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af2d0 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514469] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514484] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514491] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514498] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514504] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514511] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514517] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514522] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514528] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514534] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514540] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514546] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514553] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514558] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514565] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514570] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514576] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514584] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514590] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514596] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514603] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514609] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514615] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514621] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514627] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514633] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514639] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514659] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514665] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514671] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514676] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514682] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514694] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514700] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514706] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514712] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514718] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514724] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514729] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514735] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514741] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514749] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514755] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514761] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514767] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514772] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514778] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514784] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514790] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 
00:51:49.242 [2024-06-11 03:48:30.514796] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514802] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514808] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514814] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514820] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514825] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514831] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514836] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514842] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514847] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.514853] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af770 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515696] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515709] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515715] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515721] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515727] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515733] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515741] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515748] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515757] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515763] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515769] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is 
same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515775] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515780] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515786] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515792] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515798] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.242 [2024-06-11 03:48:30.515804] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515810] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515816] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515822] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515828] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515834] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515840] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515846] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515853] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515858] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515864] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515870] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515876] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515887] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515894] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515902] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515908] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515915] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515921] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515927] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515934] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515940] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515947] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515953] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515958] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515964] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515970] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515975] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515981] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515987] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515992] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.515998] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516005] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516015] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516021] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516027] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516043] 
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516049] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516055] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516061] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516067] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516073] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516078] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.516085] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9afc30 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517016] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517032] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517038] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517044] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517050] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517057] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517063] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517069] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517075] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517081] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517087] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517093] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517099] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517105] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 
00:51:49.243 [2024-06-11 03:48:30.517110] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517116] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517122] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517128] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517133] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517138] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.243 [2024-06-11 03:48:30.517144] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517150] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517155] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517160] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517166] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517173] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517179] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517185] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517192] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517197] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517203] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517209] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517214] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517220] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517226] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517232] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is 
same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517238] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517243] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517249] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517254] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517262] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517268] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517275] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517283] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517290] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517295] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517301] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517307] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517313] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517319] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517325] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517331] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517337] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517343] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517349] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517354] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517361] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517366] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517373] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517379] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517386] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517392] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.517398] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b00d0 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518181] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518195] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518201] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518208] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518213] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518219] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518225] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518231] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518237] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518243] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518249] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518256] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518262] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518267] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518273] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518279] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518285] 
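The flood above comes from the NVMe-oF TCP target: nvmf_tcp_qpair_set_recv_state() refuses a transition when a qpair's PDU receive state machine is already in the requested state, logging once per redundant call, and state(5) appears to correspond to the transport's error/teardown state in this build, so a qpair being torn down can emit the line many times in a tight loop. Below is a minimal standalone sketch of that guard; the enum, struct, and function names are illustrative stand-ins modeled on the logged behavior, not the SPDK source.

#include <stdio.h>

/* Illustrative stand-in for the PDU receive states; in the build that
 * produced this log, the error/teardown state evidently numbers 5. */
enum pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY,   /* 0 */
	RECV_STATE_AWAIT_PDU_CH,      /* 1 */
	RECV_STATE_AWAIT_PDU_PSH,     /* 2 */
	RECV_STATE_AWAIT_PDU_PAYLOAD, /* 3 */
	RECV_STATE_QUIESCING,         /* 4 */
	RECV_STATE_ERROR,             /* 5 */
};

struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

/* Same-state guard mirroring the tcp.c:1602 message: a redundant
 * transition is logged and ignored rather than applied twice. */
static void
qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = RECV_STATE_AWAIT_PDU_READY };

	qpair_set_recv_state(&q, RECV_STATE_ERROR); /* real transition, silent */
	for (int i = 0; i < 3; i++) {
		/* each redundant request reproduces one log line */
		qpair_set_recv_state(&q, RECV_STATE_ERROR);
	}
	return 0;
}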
tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518290] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518296] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518303] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518312] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518318] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518323] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518328] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518335] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518341] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518347] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518354] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518359] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518365] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.518371] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.519398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.244 [2024-06-11 03:48:30.519429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.244 [2024-06-11 03:48:30.519438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.244 [2024-06-11 03:48:30.519445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.244 [2024-06-11 03:48:30.519453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.244 [2024-06-11 03:48:30.519460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.244 [2024-06-11 03:48:30.519467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.244 [2024-06-11 03:48:30.519473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.244 [2024-06-11 03:48:30.519481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1610 is same with the state(5) to be set 00:51:49.244 [2024-06-11 03:48:30.519511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.244 [2024-06-11 03:48:30.519519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2187a90 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.519593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21805d0 is same with the state(5) to be set 00:51:49.245 [2024-06-11 
03:48:30.519678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226dbc0 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.519773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c9190 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.519854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c926f0 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.519931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.519980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.519986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b3fc0 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.520008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.520025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.520041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 
03:48:30.520055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.520068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d2c00 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.520096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.520103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.520117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.520130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.245 [2024-06-11 03:48:30.520143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.245 [2024-06-11 03:48:30.520150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22720f0 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526561] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526571] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526578] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526585] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526592] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526598] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526604] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526610] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 
03:48:30.526616] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526623] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526629] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.245 [2024-06-11 03:48:30.526637] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526645] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526652] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526658] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526664] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526670] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526676] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526682] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526688] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526693] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526699] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526705] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526711] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526718] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526724] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526729] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526735] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526741] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526747] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to 
be set 00:51:49.246 [2024-06-11 03:48:30.526753] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.526758] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b0570 is same with the state(5) to be set 00:51:49.246 [2024-06-11 03:48:30.537316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537493] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.246 [2024-06-11 03:48:30.537814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.246 [2024-06-11 03:48:30.537822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.537986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.537994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.247 [2024-06-11 03:48:30.538340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.247 [2024-06-11 03:48:30.538347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.248 [2024-06-11 03:48:30.538373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:51:49.248 [2024-06-11 03:48:30.538428] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x222c7c0 was disconnected and freed. reset controller. 
00:51:49.248 [2024-06-11 03:48:30.538517] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc1610 (9): Bad file descriptor 
00:51:49.248 [... same flush failure for tqpair=0x2187a90 and 0x21805d0 (03:48:30.538541-538556); one further ASYNC EVENT REQUEST/ABORTED - SQ DELETION block (qid:0 cid:0-3, 03:48:30.538583-538636) closed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226f820 is same with the state(5) to be set; then the same flush failure for tqpair=0x226dbc0, 0x20c9190, 0x1c926f0, 0x20b3fc0, 0x20d2c00, 0x22720f0 (03:48:30.538653-538716) ...]
00:51:49.248 [2024-06-11 03:48:30.538811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:49.248 [2024-06-11 03:48:30.538822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:51:49.249 [... the WRITE/ABORTED - SQ DELETION pair above repeated for cid:1-51 (lba 16512-22912 in steps of 128; 03:48:30.538835-539629) ...] 00:51:49.249 [2024-06-11 
03:48:30.539635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 
03:48:30.539790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.539822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.539830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c96b30 is same with the state(5) to be set 00:51:49.249 [2024-06-11 03:48:30.539883] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c96b30 was disconnected and freed. reset controller. 00:51:49.249 [2024-06-11 03:48:30.540067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.540084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.540095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.540103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.540112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.540122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.540131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.540138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.249 [2024-06-11 03:48:30.540146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.249 [2024-06-11 03:48:30.540154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.250 [2024-06-11 03:48:30.540162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.250 [2024-06-11 03:48:30.540170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.250 [2024-06-11 03:48:30.540178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.250 [2024-06-11 03:48:30.540185] nvme_qpair.c: 
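The "(00/08)" in the completions above decodes, per the NVMe specification, as status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion: the I/O submission queue was deleted out from under the in-flight WRITEs while the controller was being reset, so none of them executed. As a minimal illustrative sketch (not code from this test), a consumer of SPDK's public spdk/nvme.h API could recognize this status in its I/O completion callback like so:

    #include "spdk/nvme.h"

    static void
    io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            /* "(00/08)" == SCT 0x0 / SC 0x08: aborted because the submission
             * queue was deleted, e.g. during a controller reset. */
            if (spdk_nvme_cpl_is_error(cpl) &&
                cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The command did not complete; it can be requeued once
                     * the controller reconnects. */
            }
    }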
00:51:49.249-00:51:49.251 [2024-06-11 03:48:30.540067 - 03:48:30.541089] [... 64 repeated command/completion pairs condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba = 16384 + 128*cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:51:49.251 [2024-06-11 03:48:30.541175] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2249770 was disconnected and freed. reset controller.
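Each "disconnected and freed. reset controller." notice is bdev_nvme's disconnected-qpair callback escalating the dead TCP connection to a full controller reset, which is what the "resetting controller" notices below record. At the public-API level the equivalent operation is spdk_nvme_ctrlr_reset(); a minimal sketch under that assumption (the bdev layer's actual path is its internal asynchronous reset machinery, not this call):

    #include "spdk/log.h"
    #include "spdk/nvme.h"

    static void
    handle_disconnected_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
            /* Disconnect, reconnect, and reinitialize the controller in one
             * synchronous call; a non-zero return is the "Resetting controller
             * failed" / "in failed state" outcome seen further down. */
            int rc = spdk_nvme_ctrlr_reset(ctrlr);
            if (rc != 0) {
                    SPDK_ERRLOG("controller reset failed: %d\n", rc);
            }
    }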
00:51:49.251 [2024-06-11 03:48:30.544312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:51:49.251 [2024-06-11 03:48:30.544914] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:51:49.251 [2024-06-11 03:48:30.544939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:51:49.251 [2024-06-11 03:48:30.544952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:51:49.251 [2024-06-11 03:48:30.544969] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226f820 (9): Bad file descriptor 00:51:49.251 [2024-06-11 03:48:30.545183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.251 [2024-06-11 03:48:30.545197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b3fc0 with addr=10.0.0.2, port=4420 00:51:49.251 [2024-06-11 03:48:30.545204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b3fc0 is same with the state(5) to be set 00:51:49.251 [2024-06-11 03:48:30.545250] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:51:49.251 [2024-06-11 03:48:30.545506] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:51:49.251 [2024-06-11 03:48:30.545552] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:51:49.251 [2024-06-11 03:48:30.545598] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:51:49.251 [2024-06-11 03:48:30.545849] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:51:49.251 [2024-06-11 03:48:30.545893] nvme_tcp.c:1222:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:51:49.251 [2024-06-11 03:48:30.546060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.251 [2024-06-11 03:48:30.546076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21805d0 with addr=10.0.0.2, port=4420 00:51:49.251 [2024-06-11 03:48:30.546084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21805d0 is same with the state(5) to be set 00:51:49.251 [2024-06-11 03:48:30.546104] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b3fc0 (9): Bad file descriptor 00:51:49.251 [2024-06-11 03:48:30.546383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.251 [2024-06-11 03:48:30.546396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226f820 with addr=10.0.0.2, port=4420 00:51:49.251 [2024-06-11 03:48:30.546403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226f820 is same with the state(5) to be set 00:51:49.251 [2024-06-11 03:48:30.546412] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21805d0 (9): Bad file descriptor 00:51:49.251 [2024-06-11 03:48:30.546422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:51:49.251 [2024-06-11 03:48:30.546430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:51:49.251 [2024-06-11 03:48:30.546438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:51:49.251 [2024-06-11 03:48:30.546489] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.251 [2024-06-11 03:48:30.546500] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226f820 (9): Bad file descriptor 00:51:49.251 [2024-06-11 03:48:30.546508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:51:49.251 [2024-06-11 03:48:30.546515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:51:49.251 [2024-06-11 03:48:30.546521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:51:49.251 [2024-06-11 03:48:30.546554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.251 [2024-06-11 03:48:30.546562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:51:49.251 [2024-06-11 03:48:30.546567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:51:49.251 [2024-06-11 03:48:30.546577] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:51:49.251 [2024-06-11 03:48:30.546610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
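For reference, errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux: nothing was accepting TCP connections on 10.0.0.2 port 4420 (the NVMe/TCP well-known port) when the reconnects were attempted, so each reset attempt ended in the failed state recorded here. A self-contained check (illustrative only):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            /* On Linux, ECONNREFUSED is errno 111, matching the
             * "connect() failed, errno = 111" records in this log. */
            printf("ECONNREFUSED = %d (%s)\n", ECONNREFUSED, strerror(ECONNREFUSED));
            return 0;
    }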
00:51:49.251-00:51:49.253 [2024-06-11 03:48:30.548651 - 03:48:30.549672] [... 64 repeated command/completion pairs condensed: nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba = 16384 + 128*cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:51:49.253 [2024-06-11 03:48:30.549680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c958b0 is same with the state(5) to be set 00:51:49.253 [2024-06-11 03:48:30.550672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550702] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550872] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.253 [2024-06-11 03:48:30.550914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.253 [2024-06-11 03:48:30.550922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.550930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.550937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.550946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.550952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.550961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.550968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.550976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.550983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.550992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.550999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:51:49.254 [2024-06-11 03:48:30.551522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.254 [2024-06-11 03:48:30.551553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.254 [2024-06-11 03:48:30.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 
03:48:30.551678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.551700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.551708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2245890 is same with the state(5) to be set 00:51:49.255 [2024-06-11 03:48:30.552694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.552985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.552996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.255 [2024-06-11 03:48:30.553185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.255 [2024-06-11 03:48:30.553193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.256 [2024-06-11 03:48:30.553707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.256 [2024-06-11 03:48:30.553717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.553724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.553732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246df0 is same with the state(5) to be set 00:51:49.257 [2024-06-11 03:48:30.554713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554770] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.554991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.554999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.257 [2024-06-11 03:48:30.555354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.257 [2024-06-11 03:48:30.555361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
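(Editor's note on the pattern above: as the TCP qpair is torn down, SPDK prints each still-outstanding READ command followed by its completion; the "(00/08)" status decodes to status code type 0x0 GENERIC, status code 0x08 ABORTED - SQ DELETION, i.e. the command was cancelled because its submission queue was deleted, not failed by the device. A minimal sketch of how an application's completion callback could classify these completions, assuming only the public SPDK NVMe driver API — the callback name read_complete and the requeue policy are illustrative, not taken from this log:)

/*
 * Hypothetical sketch, not part of this log: classify the completions
 * printed above. spdk_nvme_cpl_is_error() and the SPDK_NVME_SCT_/SC_
 * constants are the public API from include/spdk/nvme.h.
 */
#include <stdio.h>
#include "spdk/nvme.h"

/* Matches the spdk_nvme_cmd_cb signature used by spdk_nvme_ns_cmd_read(). */
static void
read_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* normal success path */
	}
	/* "(00/08)" in the log is sct/sc: GENERIC / ABORTED - SQ DELETION. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The qpair (and its submission queue) went away, so the
		 * command was aborted rather than failed by the media; it
		 * can be resubmitted on a fresh qpair after reconnect. */
		fprintf(stderr, "READ aborted by SQ deletion; will requeue\n");
		return;
	}
	fprintf(stderr, "READ failed: sct=0x%x sc=0x%x\n",
	        cpl->status.sct, cpl->status.sc);
}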
00:51:49.258 [2024-06-11 03:48:30.555440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 
03:48:30.555602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.555759] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.555767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22482c0 is same with the state(5) to be set 00:51:49.258 [2024-06-11 03:48:30.556761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556928] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.556991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.258 [2024-06-11 03:48:30.556998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.258 [2024-06-11 03:48:30.557007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
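(Editor's note, a quick consistency check on the records above rather than new data: each aborted READ is len:128 blocks and consecutive cids cover consecutive 128-block extents, so within one qpair's dump the LBAs follow

	lba(cid) = 16384 + 128 * cid

e.g. cid:0 sits at lba:16384 and cid:40 at 16384 + 128 * 40 = 16384 + 5120 = 21504, matching the log — one contiguous sequential read stream cancelled wholesale by the SQ deletion.)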
00:51:49.259 [2024-06-11 03:48:30.557582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.259 [2024-06-11 03:48:30.557647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.259 [2024-06-11 03:48:30.557656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 
03:48:30.557748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.557816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.557824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2229f90 is same with the state(5) to be set 00:51:49.260 [2024-06-11 03:48:30.558836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.558988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.558997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.260 [2024-06-11 03:48:30.559317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.260 [2024-06-11 03:48:30.559325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559424] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:49.261 [2024-06-11 03:48:30.559736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.261 [2024-06-11 03:48:30.559744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: [condensed: READ sqid:1 cid:55-63 nsid:1 lba:23424-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (2024-06-11 03:48:30.559752-30.559879)]
00:51:49.262 [2024-06-11 03:48:30.559886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222b440 is same with the state(5) to be set
00:51:49.262 [2024-06-11 03:48:30.564331-30.565434] nvme_qpair.c: 243/474: *NOTICE*: [condensed: READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:51:49.263 [2024-06-11 03:48:30.565442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a4a50 is same with the state(5) to be set
00:51:49.263 [2024-06-11 03:48:30.570115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:51:49.263 [2024-06-11 03:48:30.570139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:51:49.263 [2024-06-11 03:48:30.570147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:51:49.263 [2024-06-11 03:48:30.570156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:51:49.263 [2024-06-11 03:48:30.570224-30.570251] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (repeated 3 times)
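Editor's note: the condensed runs above are the expected signature of a TCP qpair being torn down with I/O still queued; every outstanding READ completes with status ABORTED - SQ DELETION (status code type 00h, status code 08h) instead of hanging. A quick, hypothetical way to tally such completions from a saved copy of this log (plain grep over an assumed file name, not part of the test suite):

  # count aborted completions per submission queue in a captured log
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | sort | uniq -c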
00:51:49.263 [2024-06-11 03:48:30.570320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:51:49.263 [2024-06-11 03:48:30.570332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:51:49.263 task offset: 16384 on job bdev=Nvme9n1 fails
00:51:49.263 Latency(us): all jobs ran Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400, and ended in about 0.64-0.66 seconds with error:
00:51:49.263 Device      runtime(s)   IOPS      MiB/s    Fail/s   TO/s   Average      min        max
00:51:49.263 Nvme1n1     0.65         197.51    12.34    98.75    0.00   213095.21    15541.39   210713.84
00:51:49.263 Nvme2n1     0.64         199.81    12.49    99.90    0.00   205432.04    5929.45    231685.36
00:51:49.263 Nvme3n1     0.65         196.90    12.31    98.45    0.00   203542.43    14667.58   198730.12
00:51:49.263 Nvme4n1     0.65         196.29    12.27    98.14    0.00   199023.34    23717.79   200727.41
00:51:49.263 Nvme5n1     0.65         195.68    12.23    97.84    0.00   194602.91    18849.40   189742.32
00:51:49.263 Nvme6n1     0.64         199.49    12.47    99.74    0.00   185357.57    6647.22    208716.56
00:51:49.263 Nvme7n1     0.66         195.06    12.19    97.53    0.00   185035.17    18225.25   207717.91
00:51:49.263 Nvme8n1     0.66         194.45    12.15    97.23    0.00   180604.02    18974.23   192738.26
00:51:49.264 Nvme9n1     0.64         200.11    12.51    100.05   0.00   168968.78    22219.82   220700.28
00:51:49.264 Nvme10n1    0.66         96.42     6.03     96.42    0.00   257623.77    20347.37   237677.23
00:51:49.264 ===================================================================================================
00:51:49.264 Total       -            1871.70   116.98   984.06   0.00   197318.34    5929.45    237677.23
00:51:49.264 [2024-06-11 03:48:30.595485] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:51:49.264 [2024-06-11 03:48:30.595529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:51:49.264 [2024-06-11 03:48:30.595739-30.598243] posix.c:1037:posix_sock_create / nvme_tcp.c:2378/327: *ERROR*: [condensed: connect() failed, errno = 111 and "sock connection error ... with addr=10.0.0.2, port=4420", each followed by the recv-state error, for tqpair=0x1c926f0, 0x20d2c00, 0x20c9190, 0x2187a90, 0x1bc1610, 0x22720f0 and 0x226dbc0]
00:51:49.264 [2024-06-11 03:48:30.597709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:51:49.264 [2024-06-11 03:48:30.597725] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
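Editor's note: the per-device table above is bdevperf verify-mode output. As a sanity check, MiB/s tracks IOPS at the 64 KiB I/O size (197.51 IOPS x 65536 B is about 12.34 MiB/s), and the Fail/s column carries the READs aborted by the queue teardown. A minimal standalone sketch with the same parameters (binary path, config file and runtime are assumptions, not taken from this run):

  # queue depth 64, 64 KiB I/Os, verify workload, as in the table header
  ./build/examples/bdevperf -c bdevperf.conf -q 64 -o 65536 -w verify -t 10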
00:51:49.264 [2024-06-11 03:48:30.598255-30.598285] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: [condensed: Failed to flush tqpair=0x1c926f0, 0x20d2c00, 0x20c9190 and 0x2187a90 (9): Bad file descriptor]
00:51:49.264 [2024-06-11 03:48:30.598314-30.598359] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (repeated 5 times)
00:51:49.264 [2024-06-11 03:48:30.598422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:51:49.264 [2024-06-11 03:48:30.598599-30.599221] posix.c:1037:posix_sock_create / nvme_tcp.c:2378/327: *ERROR*: [condensed: connect() failed, errno = 111 with addr=10.0.0.2, port=4420 for tqpair=0x20b3fc0, 0x21805d0 and 0x226f820]
00:51:49.264 [2024-06-11 03:48:30.598787-30.599346] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: [condensed: Failed to flush tqpair=0x1bc1610, 0x22720f0, 0x226dbc0, 0x20b3fc0, 0x21805d0 and 0x226f820 (9): Bad file descriptor]
00:51:49.264-00:51:49.265 [2024-06-11 03:48:30.598813-30.599446] nvme_ctrlr.c:4041/1750/1042: *ERROR*: [condensed: for each of cnode1, cnode3, cnode4, cnode5, cnode7, cnode8, cnode10, cnode2, cnode6 and cnode9 in turn: "Ctrlr is in error state", "controller reinitialization failed", "in failed state."]
00:51:49.264 [2024-06-11 03:48:30.598974-30.599470] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (repeated 10 times)
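Editor's note: errno = 111 is ECONNREFUSED. Once the target application is gone, every reconnect attempt to 10.0.0.2:4420 is refused, each controller's reset gives up, and all ten controllers are left in failed state, which is the condition nvmf_shutdown_tc3 provokes. A hypothetical triage step for an interactive run (the bdevperf RPC socket path is an assumption taken from the harness default seen later in this log; bdev_nvme_get_controllers is a standard SPDK RPC):

  # ask the initiator-side app which controllers it still holds
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # confirm the listener is really gone (expect a refused connection, errno 111)
  nc -z 10.0.0.2 4420 || echo 'target port 4420 refused'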
00:51:49.523 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:51:49.523 03:48:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2299027 00:51:50.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2299027) - No such process 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:51:50.902 rmmod nvme_tcp 00:51:50.902 rmmod nvme_fabrics 00:51:50.902 rmmod nvme_keyring 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:50.902 03:48:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:52.808 03:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:51:52.808 00:51:52.808 real 0m7.042s 00:51:52.808 user 0m16.159s 00:51:52.808 sys 0m1.182s 00:51:52.808 
03:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:52.808 03:48:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:51:52.808 ************************************ 00:51:52.808 END TEST nvmf_shutdown_tc3 00:51:52.808 ************************************ 00:51:52.808 03:48:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:51:52.808 00:51:52.808 real 0m30.451s 00:51:52.808 user 1m13.346s 00:51:52.808 sys 0m8.479s 00:51:52.808 03:48:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1125 -- # xtrace_disable 00:51:52.808 03:48:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:51:52.808 ************************************ 00:51:52.808 END TEST nvmf_shutdown 00:51:52.808 ************************************ 00:51:52.808 03:48:34 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:51:52.808 03:48:34 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:52.808 03:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:51:52.808 03:48:34 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:51:52.808 03:48:34 nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:52.808 03:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:51:52.808 03:48:34 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:51:52.808 03:48:34 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:51:52.808 03:48:34 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:51:52.808 03:48:34 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:51:52.808 03:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:51:53.067 ************************************ 00:51:53.067 START TEST nvmf_multicontroller 00:51:53.067 ************************************ 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:51:53.067 * Looking for test storage... 
00:51:53.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:53.067 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:51:53.068 03:48:34 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:51:53.068 03:48:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:51:59.638 03:48:40 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:51:59.638 Found 0000:86:00.0 (0x8086 - 0x159b) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:51:59.638 Found 0000:86:00.1 (0x8086 - 0x159b) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:51:59.638 Found net devices under 0000:86:00.0: cvl_0_0 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:51:59.638 Found net devices under 0000:86:00.1: cvl_0_1 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:51:59.638 03:48:40 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:51:59.638 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:51:59.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:59.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:51:59.639 00:51:59.639 --- 10.0.0.2 ping statistics --- 00:51:59.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:59.639 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:51:59.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:59.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:51:59.639 00:51:59.639 --- 10.0.0.1 ping statistics --- 00:51:59.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:59.639 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@723 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2303364 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2303364 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2303364 ']' 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:59.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 [2024-06-11 03:48:40.411497] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:51:59.639 [2024-06-11 03:48:40.411545] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:59.639 EAL: No free 2048 kB hugepages reported on node 1 00:51:59.639 [2024-06-11 03:48:40.476879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:51:59.639 [2024-06-11 03:48:40.518376] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:59.639 [2024-06-11 03:48:40.518411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:59.639 [2024-06-11 03:48:40.518418] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:59.639 [2024-06-11 03:48:40.518424] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:59.639 [2024-06-11 03:48:40.518429] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:59.639 [2024-06-11 03:48:40.518536] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:51:59.639 [2024-06-11 03:48:40.518624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:51:59.639 [2024-06-11 03:48:40.518625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@729 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 [2024-06-11 03:48:40.655750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 Malloc0 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 [2024-06-11 03:48:40.717757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 [2024-06-11 03:48:40.725700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 Malloc1 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 
00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2303589 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2303589 /var/tmp/bdevperf.sock 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # '[' -z 2303589 ']' 00:51:59.639 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:51:59.640 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local max_retries=100 00:51:59.640 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:51:59.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
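The calls that follow attach one controller (NVMe0) through bdevperf's RPC socket and then deliberately re-issue bdev_nvme_attach_controller four times with conflicting arguments (a different host NQN, a different subsystem NQN, multipath "disable", multipath "failover"); each retry is expected to fail with JSON-RPC error -114 because the controller name is already in use. A minimal sketch of that negative check, assuming scripts/rpc.py from an SPDK checkout and the same socket path as in this log:

    #!/usr/bin/env bash
    # Sketch only (not the harness's own script): duplicate-attach must fail.
    sock=/var/tmp/bdevperf.sock

    # First attach succeeds and exposes bdev NVMe0n1.
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Reusing the name NVMe0 against a different subsystem NQN must be
    # rejected; the log below shows JSON-RPC error -114 for this variant
    # and for the host-NQN and multipath variants as well.
    if scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2; then
      echo "duplicate attach unexpectedly succeeded" >&2
      exit 1
    fi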
00:51:59.640 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@839 -- # xtrace_disable 00:51:59.640 03:48:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.640 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:51:59.640 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@863 -- # return 0 00:51:59.640 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:51:59.640 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.640 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.899 NVMe0n1 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:51:59.899 1 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.899 request: 00:51:59.899 { 00:51:59.899 "name": "NVMe0", 00:51:59.899 "trtype": "tcp", 00:51:59.899 "traddr": "10.0.0.2", 00:51:59.899 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:51:59.899 "hostaddr": "10.0.0.2", 00:51:59.899 "hostsvcid": "60000", 00:51:59.899 "adrfam": "ipv4", 00:51:59.899 "trsvcid": "4420", 00:51:59.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:59.899 "method": 
"bdev_nvme_attach_controller", 00:51:59.899 "req_id": 1 00:51:59.899 } 00:51:59.899 Got JSON-RPC error response 00:51:59.899 response: 00:51:59.899 { 00:51:59.899 "code": -114, 00:51:59.899 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:51:59.899 } 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.899 request: 00:51:59.899 { 00:51:59.899 "name": "NVMe0", 00:51:59.899 "trtype": "tcp", 00:51:59.899 "traddr": "10.0.0.2", 00:51:59.899 "hostaddr": "10.0.0.2", 00:51:59.899 "hostsvcid": "60000", 00:51:59.899 "adrfam": "ipv4", 00:51:59.899 "trsvcid": "4420", 00:51:59.899 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:51:59.899 "method": "bdev_nvme_attach_controller", 00:51:59.899 "req_id": 1 00:51:59.899 } 00:51:59.899 Got JSON-RPC error response 00:51:59.899 response: 00:51:59.899 { 00:51:59.899 "code": -114, 00:51:59.899 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:51:59.899 } 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.899 request: 00:51:59.899 { 00:51:59.899 "name": "NVMe0", 00:51:59.899 "trtype": "tcp", 00:51:59.899 "traddr": "10.0.0.2", 00:51:59.899 "hostaddr": "10.0.0.2", 00:51:59.899 "hostsvcid": "60000", 00:51:59.899 "adrfam": "ipv4", 00:51:59.899 "trsvcid": "4420", 00:51:59.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:59.899 "multipath": "disable", 00:51:59.899 "method": "bdev_nvme_attach_controller", 00:51:59.899 "req_id": 1 00:51:59.899 } 00:51:59.899 Got JSON-RPC error response 00:51:59.899 response: 00:51:59.899 { 00:51:59.899 "code": -114, 00:51:59.899 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:51:59.899 } 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.899 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:51:59.899 request: 00:51:59.899 { 00:51:59.899 "name": "NVMe0", 00:51:59.899 "trtype": "tcp", 00:51:59.899 "traddr": "10.0.0.2", 00:51:59.899 "hostaddr": "10.0.0.2", 00:51:59.899 "hostsvcid": "60000", 00:51:59.899 "adrfam": "ipv4", 00:51:59.899 "trsvcid": "4420", 00:51:59.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:51:59.900 "multipath": "failover", 00:51:59.900 "method": "bdev_nvme_attach_controller", 00:51:59.900 "req_id": 1 00:51:59.900 } 00:51:59.900 Got JSON-RPC error response 00:51:59.900 response: 00:51:59.900 { 00:51:59.900 "code": -114, 00:51:59.900 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:51:59.900 } 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:51:59.900 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:00.158 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:00.158 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:00.416 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:52:00.416 03:48:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:52:01.352 0 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2303589 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2303589 ']' 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2303589 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2303589 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2303589' 00:52:01.611 killing process with pid 2303589 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2303589 00:52:01.611 03:48:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2303589 00:52:01.611 03:48:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:52:01.611 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:01.611 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:52:01.870 03:48:43 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # sort -u 00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # cat 00:52:01.870 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:52:01.870 [2024-06-11 03:48:40.827253] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:52:01.870 [2024-06-11 03:48:40.827303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2303589 ] 00:52:01.870 EAL: No free 2048 kB hugepages reported on node 1 00:52:01.870 [2024-06-11 03:48:40.885316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:01.870 [2024-06-11 03:48:40.927053] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:52:01.870 [2024-06-11 03:48:41.640978] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 8b1ae65e-81e0-4887-993a-b169880f4c8c already exists 00:52:01.870 [2024-06-11 03:48:41.641007] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:8b1ae65e-81e0-4887-993a-b169880f4c8c alias for bdev NVMe1n1 00:52:01.870 [2024-06-11 03:48:41.641024] bdev_nvme.c:4308:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:52:01.870 Running I/O for 1 seconds... 
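The bdevperf result table that follows reports runtime, IOPS, and throughput for the 4 KiB write job. The MiB/s column is simply IOPS scaled by the 4096-byte I/O size (2^20 bytes per MiB); a one-line cross-check of the figures reported below (a sketch, not part of the harness):

    awk 'BEGIN { printf "%.2f MiB/s\n", 24053.42 * 4096 / 1048576 }'   # -> 93.96 MiB/s

That is, 24053.42 IOPS x 4096 B is roughly 98.5 MB/s, matching the 93.96 MiB/s shown in the table.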
00:52:01.870
00:52:01.870 Latency(us)
00:52:01.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:52:01.870 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:52:01.870 NVMe0n1 : 1.01 24053.42 93.96 0.00 0.00 5304.87 4962.01 10610.59
00:52:01.870 ===================================================================================================================
00:52:01.870 Total : 24053.42 93.96 0.00 0.00 5304.87 4962.01 10610.59
00:52:01.870 Received shutdown signal, test time was about 1.000000 seconds
00:52:01.870
00:52:01.870 Latency(us)
00:52:01.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:52:01.870 ===================================================================================================================
00:52:01.870 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:52:01.870 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1617 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # read -r file
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:52:01.870 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:52:01.870 rmmod nvme_tcp
00:52:01.870 rmmod nvme_fabrics
00:52:01.870 rmmod nvme_keyring
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2303364 ']'
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2303364
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@949 -- # '[' -z 2303364 ']'
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # kill -0 2303364
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # uname
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2303364
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2303364'
00:52:01.871 killing process with pid 2303364
00:52:01.871 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@968 -- # kill 2303364
00:52:01.871 03:48:43 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@973 -- # wait 2303364 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:02.130 03:48:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:04.034 03:48:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:52:04.034 00:52:04.034 real 0m11.203s 00:52:04.034 user 0m12.386s 00:52:04.034 sys 0m5.238s 00:52:04.034 03:48:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # xtrace_disable 00:52:04.034 03:48:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:52:04.034 ************************************ 00:52:04.034 END TEST nvmf_multicontroller 00:52:04.034 ************************************ 00:52:04.293 03:48:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:52:04.293 03:48:45 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:52:04.293 03:48:45 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:52:04.293 03:48:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:52:04.293 ************************************ 00:52:04.293 START TEST nvmf_aer 00:52:04.293 ************************************ 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:52:04.293 * Looking for test storage... 
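Before the aer test proper starts, common.sh rediscovers the two E810 ports and rebuilds the same split topology the multicontroller run used: one ice port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2 to act as the target side, while its sibling (cvl_0_1) stays in the root namespace at 10.0.0.1 as the initiator. Condensed from the commands captured in this log (a sketch; run as root):

    # Per-test network topology, as set up by nvmf_tcp_init in nvmf/common.sh.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Running nvmf_tgt under "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD prefix seen in this log) forces the NVMe/TCP traffic across the two physical ports rather than over loopback.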
00:52:04.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:04.293 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:52:04.294 03:48:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:52:10.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:52:10.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:52:10.907 Found net devices under 0000:86:00.0: cvl_0_0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:52:10.907 Found net devices under 0000:86:00.1: cvl_0_1 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:52:10.907 
03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:52:10.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:10.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:52:10.907 00:52:10.907 --- 10.0.0.2 ping statistics --- 00:52:10.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:10.907 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:52:10.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:52:10.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:52:10.907 00:52:10.907 --- 10.0.0.1 ping statistics --- 00:52:10.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:10.907 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@723 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2307657 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2307657 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 -- # '[' -z 2307657 ']' 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # local max_retries=100 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:10.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@839 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 [2024-06-11 03:48:51.654955] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:52:10.907 [2024-06-11 03:48:51.654999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:10.907 EAL: No free 2048 kB hugepages reported on node 1 00:52:10.907 [2024-06-11 03:48:51.718508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:52:10.907 [2024-06-11 03:48:51.761002] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:10.907 [2024-06-11 03:48:51.761043] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:52:10.907 [2024-06-11 03:48:51.761053] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:10.907 [2024-06-11 03:48:51.761061] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:10.907 [2024-06-11 03:48:51.761067] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:10.907 [2024-06-11 03:48:51.761121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:52:10.907 [2024-06-11 03:48:51.761222] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:52:10.907 [2024-06-11 03:48:51.761307] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:52:10.907 [2024-06-11 03:48:51.761310] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@863 -- # return 0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@729 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 [2024-06-11 03:48:51.904948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 Malloc0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 [2024-06-11 03:48:51.956293] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.907 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.907 [ 00:52:10.907 { 00:52:10.907 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:52:10.907 "subtype": "Discovery", 00:52:10.907 "listen_addresses": [], 00:52:10.907 "allow_any_host": true, 00:52:10.907 "hosts": [] 00:52:10.907 }, 00:52:10.907 { 00:52:10.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:10.907 "subtype": "NVMe", 00:52:10.907 "listen_addresses": [ 00:52:10.907 { 00:52:10.907 "trtype": "TCP", 00:52:10.907 "adrfam": "IPv4", 00:52:10.907 "traddr": "10.0.0.2", 00:52:10.907 "trsvcid": "4420" 00:52:10.907 } 00:52:10.908 ], 00:52:10.908 "allow_any_host": true, 00:52:10.908 "hosts": [], 00:52:10.908 "serial_number": "SPDK00000000000001", 00:52:10.908 "model_number": "SPDK bdev Controller", 00:52:10.908 "max_namespaces": 2, 00:52:10.908 "min_cntlid": 1, 00:52:10.908 "max_cntlid": 65519, 00:52:10.908 "namespaces": [ 00:52:10.908 { 00:52:10.908 "nsid": 1, 00:52:10.908 "bdev_name": "Malloc0", 00:52:10.908 "name": "Malloc0", 00:52:10.908 "nguid": "9549A393C8A646F28EF9B85E67DB3058", 00:52:10.908 "uuid": "9549a393-c8a6-46f2-8ef9-b85e67db3058" 00:52:10.908 } 00:52:10.908 ] 00:52:10.908 } 00:52:10.908 ] 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2307718 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # local i=0 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 0 -lt 200 ']' 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=1 00:52:10.908 03:48:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:52:10.908 EAL: No free 2048 kB hugepages reported on node 1 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' 1 -lt 200 ']' 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # i=2 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # sleep 0.1 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1275 -- # return 0 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.908 Malloc1 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.908 Asynchronous Event Request test 00:52:10.908 Attaching to 10.0.0.2 00:52:10.908 Attached to 10.0.0.2 00:52:10.908 Registering asynchronous event callbacks... 00:52:10.908 Starting namespace attribute notice tests for all controllers... 00:52:10.908 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:52:10.908 aer_cb - Changed Namespace 00:52:10.908 Cleaning up... 00:52:10.908 [ 00:52:10.908 { 00:52:10.908 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:52:10.908 "subtype": "Discovery", 00:52:10.908 "listen_addresses": [], 00:52:10.908 "allow_any_host": true, 00:52:10.908 "hosts": [] 00:52:10.908 }, 00:52:10.908 { 00:52:10.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:10.908 "subtype": "NVMe", 00:52:10.908 "listen_addresses": [ 00:52:10.908 { 00:52:10.908 "trtype": "TCP", 00:52:10.908 "adrfam": "IPv4", 00:52:10.908 "traddr": "10.0.0.2", 00:52:10.908 "trsvcid": "4420" 00:52:10.908 } 00:52:10.908 ], 00:52:10.908 "allow_any_host": true, 00:52:10.908 "hosts": [], 00:52:10.908 "serial_number": "SPDK00000000000001", 00:52:10.908 "model_number": "SPDK bdev Controller", 00:52:10.908 "max_namespaces": 2, 00:52:10.908 "min_cntlid": 1, 00:52:10.908 "max_cntlid": 65519, 00:52:10.908 "namespaces": [ 00:52:10.908 { 00:52:10.908 "nsid": 1, 00:52:10.908 "bdev_name": "Malloc0", 00:52:10.908 "name": "Malloc0", 00:52:10.908 "nguid": "9549A393C8A646F28EF9B85E67DB3058", 00:52:10.908 "uuid": "9549a393-c8a6-46f2-8ef9-b85e67db3058" 00:52:10.908 }, 00:52:10.908 { 00:52:10.908 "nsid": 2, 00:52:10.908 "bdev_name": "Malloc1", 00:52:10.908 "name": "Malloc1", 00:52:10.908 "nguid": "03EEA26745EE431D9D0598D35FDE2B80", 00:52:10.908 "uuid": "03eea267-45ee-431d-9d05-98d35fde2b80" 00:52:10.908 } 00:52:10.908 ] 00:52:10.908 } 00:52:10.908 ] 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2307718 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:10.908 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:52:11.166 rmmod nvme_tcp 00:52:11.166 rmmod nvme_fabrics 00:52:11.166 rmmod nvme_keyring 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2307657 ']' 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2307657 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@949 -- # '[' -z 2307657 ']' 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # kill -0 2307657 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # uname 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2307657 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2307657' 00:52:11.166 killing process with pid 2307657 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@968 -- # kill 2307657 00:52:11.166 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@973 -- # wait 2307657 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:52:11.424 03:48:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:13.328 03:48:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:52:13.328 00:52:13.328 real 0m9.153s 00:52:13.328 user 0m4.906s 00:52:13.328 sys 0m4.850s 00:52:13.328 03:48:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1125 -- # xtrace_disable 00:52:13.328 03:48:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:52:13.328 ************************************ 00:52:13.328 END TEST nvmf_aer 00:52:13.328 ************************************ 00:52:13.328 03:48:54 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:52:13.328 03:48:54 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:52:13.328 03:48:54 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:52:13.328 03:48:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:52:13.328 ************************************ 00:52:13.328 START TEST nvmf_async_init 00:52:13.328 ************************************ 00:52:13.328 03:48:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:52:13.587 * Looking for test storage... 00:52:13.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:52:13.587 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:52:13.587 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:52:13.587 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:13.587 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:13.587 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:52:13.588 
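The nvmf_aer run that completed above exercises one scenario: export a single-namespace subsystem, let the test/nvme/aer tool connect and arm its AER callback (it touches /tmp/aer_touch_file once registered), then hot-add a second namespace so the target emits the Namespace Attribute Changed event logged as "aer_cb - Changed Namespace". The rpc_cmd calls in the trace forward to scripts/rpc.py; a condensed sketch of the same sequence, with every command and argument taken from this run:

    # target-side setup (rpc_cmd in the harness wraps scripts/rpc.py)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: arm the AER callback (flags as captured above)
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    # hot-adding a second namespace fires the Changed Namespace AEN
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
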
03:48:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dc9622a26b7c4042b3e6d7e4da730450 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:52:13.588 03:48:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:52:20.149 03:49:00 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:52:20.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:52:20.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:52:20.149 Found net devices under 0000:86:00.0: cvl_0_0 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:52:20.149 Found net devices under 0000:86:00.1: cvl_0_1 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:52:20.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:20.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:52:20.149 00:52:20.149 --- 10.0.0.2 ping statistics --- 00:52:20.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:20.149 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:52:20.149 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:52:20.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:52:20.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:52:20.150 00:52:20.150 --- 10.0.0.1 ping statistics --- 00:52:20.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:20.150 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@723 -- # xtrace_disable 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2311688 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2311688 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@830 -- # '[' -z 2311688 ']' 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:20.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:52:20.150 03:49:00 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 [2024-06-11 03:49:00.967917] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:52:20.150 [2024-06-11 03:49:00.967958] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:20.150 EAL: No free 2048 kB hugepages reported on node 1 00:52:20.150 [2024-06-11 03:49:01.029237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:52:20.150 [2024-06-11 03:49:01.068107] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:20.150 [2024-06-11 03:49:01.068145] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:20.150 [2024-06-11 03:49:01.068155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:20.150 [2024-06-11 03:49:01.068161] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:20.150 [2024-06-11 03:49:01.068167] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
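As in the aer run, nvmfappstart (nvmf/common.sh@479-482 above) launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on its RPC socket; the only difference this time is the single-core mask (-m 0x1 where the aer test used -m 0xF). A rough equivalent of that start-and-wait step, assuming the stock scripts/rpc.py client and the default /var/tmp/spdk.sock socket path — the polling loop is only an approximation of the harness's waitforlisten helper:

    # start the target inside the namespace, then wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # rough stand-in for waitforlisten's retry loop
    done
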
00:52:20.150 [2024-06-11 03:49:01.068194] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@863 -- # return 0 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@729 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 [2024-06-11 03:49:01.203220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 null0 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dc9622a26b7c4042b3e6d7e4da730450 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 [2024-06-11 03:49:01.243419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 nvme0n1 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 [ 00:52:20.150 { 00:52:20.150 "name": "nvme0n1", 00:52:20.150 "aliases": [ 00:52:20.150 "dc9622a2-6b7c-4042-b3e6-d7e4da730450" 00:52:20.150 ], 00:52:20.150 "product_name": "NVMe disk", 00:52:20.150 "block_size": 512, 00:52:20.150 "num_blocks": 2097152, 00:52:20.150 "uuid": "dc9622a2-6b7c-4042-b3e6-d7e4da730450", 00:52:20.150 "assigned_rate_limits": { 00:52:20.150 "rw_ios_per_sec": 0, 00:52:20.150 "rw_mbytes_per_sec": 0, 00:52:20.150 "r_mbytes_per_sec": 0, 00:52:20.150 "w_mbytes_per_sec": 0 00:52:20.150 }, 00:52:20.150 "claimed": false, 00:52:20.150 "zoned": false, 00:52:20.150 "supported_io_types": { 00:52:20.150 "read": true, 00:52:20.150 "write": true, 00:52:20.150 "unmap": false, 00:52:20.150 "write_zeroes": true, 00:52:20.150 "flush": true, 00:52:20.150 "reset": true, 00:52:20.150 "compare": true, 00:52:20.150 "compare_and_write": true, 00:52:20.150 "abort": true, 00:52:20.150 "nvme_admin": true, 00:52:20.150 "nvme_io": true 00:52:20.150 }, 00:52:20.150 "memory_domains": [ 00:52:20.150 { 00:52:20.150 "dma_device_id": "system", 00:52:20.150 "dma_device_type": 1 00:52:20.150 } 00:52:20.150 ], 00:52:20.150 "driver_specific": { 00:52:20.150 "nvme": [ 00:52:20.150 { 00:52:20.150 "trid": { 00:52:20.150 "trtype": "TCP", 00:52:20.150 "adrfam": "IPv4", 00:52:20.150 "traddr": "10.0.0.2", 00:52:20.150 "trsvcid": "4420", 00:52:20.150 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:52:20.150 }, 00:52:20.150 "ctrlr_data": { 00:52:20.150 "cntlid": 1, 00:52:20.150 "vendor_id": "0x8086", 00:52:20.150 "model_number": "SPDK bdev Controller", 00:52:20.150 "serial_number": "00000000000000000000", 00:52:20.150 "firmware_revision": "24.09", 00:52:20.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:20.150 "oacs": { 00:52:20.150 "security": 0, 00:52:20.150 "format": 0, 00:52:20.150 "firmware": 0, 00:52:20.150 "ns_manage": 0 00:52:20.150 }, 00:52:20.150 "multi_ctrlr": true, 00:52:20.150 "ana_reporting": false 00:52:20.150 }, 00:52:20.150 "vs": { 00:52:20.150 "nvme_version": "1.3" 00:52:20.150 }, 00:52:20.150 "ns_data": { 00:52:20.150 "id": 1, 00:52:20.150 "can_share": true 00:52:20.150 } 00:52:20.150 } 00:52:20.150 ], 00:52:20.150 "mp_policy": "active_passive" 00:52:20.150 } 00:52:20.150 } 00:52:20.150 ] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.150 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.150 [2024-06-11 03:49:01.500801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:52:20.150 [2024-06-11 03:49:01.500858] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c02e0 (9): Bad file descriptor 00:52:20.410 [2024-06-11 03:49:01.633096] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.410 [ 00:52:20.410 { 00:52:20.410 "name": "nvme0n1", 00:52:20.410 "aliases": [ 00:52:20.410 "dc9622a2-6b7c-4042-b3e6-d7e4da730450" 00:52:20.410 ], 00:52:20.410 "product_name": "NVMe disk", 00:52:20.410 "block_size": 512, 00:52:20.410 "num_blocks": 2097152, 00:52:20.410 "uuid": "dc9622a2-6b7c-4042-b3e6-d7e4da730450", 00:52:20.410 "assigned_rate_limits": { 00:52:20.410 "rw_ios_per_sec": 0, 00:52:20.410 "rw_mbytes_per_sec": 0, 00:52:20.410 "r_mbytes_per_sec": 0, 00:52:20.410 "w_mbytes_per_sec": 0 00:52:20.410 }, 00:52:20.410 "claimed": false, 00:52:20.410 "zoned": false, 00:52:20.410 "supported_io_types": { 00:52:20.410 "read": true, 00:52:20.410 "write": true, 00:52:20.410 "unmap": false, 00:52:20.410 "write_zeroes": true, 00:52:20.410 "flush": true, 00:52:20.410 "reset": true, 00:52:20.410 "compare": true, 00:52:20.410 "compare_and_write": true, 00:52:20.410 "abort": true, 00:52:20.410 "nvme_admin": true, 00:52:20.410 "nvme_io": true 00:52:20.410 }, 00:52:20.410 "memory_domains": [ 00:52:20.410 { 00:52:20.410 "dma_device_id": "system", 00:52:20.410 "dma_device_type": 1 00:52:20.410 } 00:52:20.410 ], 00:52:20.410 "driver_specific": { 00:52:20.410 "nvme": [ 00:52:20.410 { 00:52:20.410 "trid": { 00:52:20.410 "trtype": "TCP", 00:52:20.410 "adrfam": "IPv4", 00:52:20.410 "traddr": "10.0.0.2", 00:52:20.410 "trsvcid": "4420", 00:52:20.410 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:52:20.410 }, 00:52:20.410 "ctrlr_data": { 00:52:20.410 "cntlid": 2, 00:52:20.410 "vendor_id": "0x8086", 00:52:20.410 "model_number": "SPDK bdev Controller", 00:52:20.410 "serial_number": "00000000000000000000", 00:52:20.410 "firmware_revision": "24.09", 00:52:20.410 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:20.410 "oacs": { 00:52:20.410 "security": 0, 00:52:20.410 "format": 0, 00:52:20.410 "firmware": 0, 00:52:20.410 "ns_manage": 0 00:52:20.410 }, 00:52:20.410 "multi_ctrlr": true, 00:52:20.410 "ana_reporting": false 00:52:20.410 }, 00:52:20.410 "vs": { 00:52:20.410 "nvme_version": "1.3" 00:52:20.410 }, 00:52:20.410 "ns_data": { 00:52:20.410 "id": 1, 00:52:20.410 "can_share": true 00:52:20.410 } 00:52:20.410 } 00:52:20.410 ], 00:52:20.410 "mp_policy": "active_passive" 00:52:20.410 } 00:52:20.410 } 00:52:20.410 ] 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
-- # mktemp 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.9PxRUWhHbM 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.9PxRUWhHbM 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.410 [2024-06-11 03:49:01.693485] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:52:20.410 [2024-06-11 03:49:01.693607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9PxRUWhHbM 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.410 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.411 [2024-06-11 03:49:01.701500] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9PxRUWhHbM 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.411 [2024-06-11 03:49:01.713535] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:52:20.411 [2024-06-11 03:49:01.713575] nvme_tcp.c:2584:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:52:20.411 nvme0n1 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.411 [ 00:52:20.411 { 00:52:20.411 "name": "nvme0n1", 00:52:20.411 "aliases": [ 00:52:20.411 "dc9622a2-6b7c-4042-b3e6-d7e4da730450" 00:52:20.411 ], 00:52:20.411 
"product_name": "NVMe disk", 00:52:20.411 "block_size": 512, 00:52:20.411 "num_blocks": 2097152, 00:52:20.411 "uuid": "dc9622a2-6b7c-4042-b3e6-d7e4da730450", 00:52:20.411 "assigned_rate_limits": { 00:52:20.411 "rw_ios_per_sec": 0, 00:52:20.411 "rw_mbytes_per_sec": 0, 00:52:20.411 "r_mbytes_per_sec": 0, 00:52:20.411 "w_mbytes_per_sec": 0 00:52:20.411 }, 00:52:20.411 "claimed": false, 00:52:20.411 "zoned": false, 00:52:20.411 "supported_io_types": { 00:52:20.411 "read": true, 00:52:20.411 "write": true, 00:52:20.411 "unmap": false, 00:52:20.411 "write_zeroes": true, 00:52:20.411 "flush": true, 00:52:20.411 "reset": true, 00:52:20.411 "compare": true, 00:52:20.411 "compare_and_write": true, 00:52:20.411 "abort": true, 00:52:20.411 "nvme_admin": true, 00:52:20.411 "nvme_io": true 00:52:20.411 }, 00:52:20.411 "memory_domains": [ 00:52:20.411 { 00:52:20.411 "dma_device_id": "system", 00:52:20.411 "dma_device_type": 1 00:52:20.411 } 00:52:20.411 ], 00:52:20.411 "driver_specific": { 00:52:20.411 "nvme": [ 00:52:20.411 { 00:52:20.411 "trid": { 00:52:20.411 "trtype": "TCP", 00:52:20.411 "adrfam": "IPv4", 00:52:20.411 "traddr": "10.0.0.2", 00:52:20.411 "trsvcid": "4421", 00:52:20.411 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:52:20.411 }, 00:52:20.411 "ctrlr_data": { 00:52:20.411 "cntlid": 3, 00:52:20.411 "vendor_id": "0x8086", 00:52:20.411 "model_number": "SPDK bdev Controller", 00:52:20.411 "serial_number": "00000000000000000000", 00:52:20.411 "firmware_revision": "24.09", 00:52:20.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:52:20.411 "oacs": { 00:52:20.411 "security": 0, 00:52:20.411 "format": 0, 00:52:20.411 "firmware": 0, 00:52:20.411 "ns_manage": 0 00:52:20.411 }, 00:52:20.411 "multi_ctrlr": true, 00:52:20.411 "ana_reporting": false 00:52:20.411 }, 00:52:20.411 "vs": { 00:52:20.411 "nvme_version": "1.3" 00:52:20.411 }, 00:52:20.411 "ns_data": { 00:52:20.411 "id": 1, 00:52:20.411 "can_share": true 00:52:20.411 } 00:52:20.411 } 00:52:20.411 ], 00:52:20.411 "mp_policy": "active_passive" 00:52:20.411 } 00:52:20.411 } 00:52:20.411 ] 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:20.411 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.9PxRUWhHbM 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:52:20.670 rmmod nvme_tcp 00:52:20.670 rmmod nvme_fabrics 00:52:20.670 rmmod nvme_keyring 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2311688 ']' 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2311688 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@949 -- # '[' -z 2311688 ']' 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # kill -0 2311688 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # uname 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2311688 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2311688' 00:52:20.670 killing process with pid 2311688 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@968 -- # kill 2311688 00:52:20.670 [2024-06-11 03:49:01.935801] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:52:20.670 [2024-06-11 03:49:01.935823] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:52:20.670 03:49:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@973 -- # wait 2311688 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:20.929 03:49:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:22.833 03:49:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:52:22.833 00:52:22.833 real 0m9.436s 00:52:22.833 user 0m2.958s 00:52:22.833 sys 0m4.866s 00:52:22.833 03:49:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:52:22.833 03:49:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:52:22.833 ************************************ 00:52:22.833 END TEST nvmf_async_init 00:52:22.833 ************************************ 00:52:22.833 03:49:04 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:52:22.833 03:49:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:52:22.833 03:49:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:52:22.833 03:49:04 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:52:23.093 ************************************ 00:52:23.093 START TEST dma 00:52:23.093 ************************************ 00:52:23.093 03:49:04 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:52:23.093 * Looking for test storage... 00:52:23.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:52:23.093 03:49:04 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:52:23.093 03:49:04 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:23.093 03:49:04 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:23.093 03:49:04 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:23.093 03:49:04 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.093 03:49:04 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.093 03:49:04 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.093 03:49:04 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:52:23.093 03:49:04 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:52:23.093 03:49:04 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:23.093 03:49:04 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:52:23.093 03:49:04 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:52:23.093 00:52:23.093 real 0m0.121s 00:52:23.093 user 0m0.052s 00:52:23.093 sys 0m0.076s 00:52:23.093 03:49:04 nvmf_tcp.dma -- common/autotest_common.sh@1125 -- # xtrace_disable 00:52:23.093 03:49:04 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:52:23.093 ************************************ 00:52:23.093 END TEST dma 00:52:23.093 ************************************ 00:52:23.093 03:49:04 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:52:23.093 03:49:04 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:52:23.093 03:49:04 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:52:23.094 03:49:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:52:23.094 ************************************ 00:52:23.094 START TEST 
nvmf_identify 00:52:23.094 ************************************ 00:52:23.094 03:49:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:52:23.353 * Looking for test storage... 00:52:23.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:23.353 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:52:23.354 03:49:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:52:28.625 Found 0000:86:00.0 (0x8086 - 0x159b) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:52:28.625 Found 0000:86:00.1 (0x8086 - 0x159b) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:28.625 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:28.626 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:52:28.884 Found net devices under 0000:86:00.0: cvl_0_0 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:52:28.884 Found net devices under 0000:86:00.1: cvl_0_1 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:52:28.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:28.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:52:28.884 00:52:28.884 --- 10.0.0.2 ping statistics --- 00:52:28.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:28.884 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:52:28.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:52:28.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:52:28.884 00:52:28.884 --- 10.0.0.1 ping statistics --- 00:52:28.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:28.884 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:52:28.884 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:52:28.885 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:28.885 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:52:28.885 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:52:28.885 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:28.885 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:52:28.885 03:49:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@723 -- # xtrace_disable 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2315577 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2315577 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # '[' -z 2315577 ']' 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # local max_retries=100 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:29.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@839 -- # xtrace_disable 00:52:29.143 03:49:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:29.143 [2024-06-11 03:49:10.353026] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:52:29.143 [2024-06-11 03:49:10.353084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:29.143 EAL: No free 2048 kB hugepages reported on node 1 00:52:29.143 [2024-06-11 03:49:10.420141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:52:29.143 [2024-06-11 03:49:10.463344] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:52:29.143 [2024-06-11 03:49:10.463384] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:29.143 [2024-06-11 03:49:10.463393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:29.143 [2024-06-11 03:49:10.463400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:29.143 [2024-06-11 03:49:10.463406] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:29.143 [2024-06-11 03:49:10.463454] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:52:29.143 [2024-06-11 03:49:10.463474] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:52:29.143 [2024-06-11 03:49:10.463500] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:52:29.143 [2024-06-11 03:49:10.463501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@863 -- # return 0 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 [2024-06-11 03:49:11.172050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@729 -- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 Malloc0 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 [2024-06-11 03:49:11.259866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.080 [ 00:52:30.080 { 00:52:30.080 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:52:30.080 "subtype": "Discovery", 00:52:30.080 "listen_addresses": [ 00:52:30.080 { 00:52:30.080 "trtype": "TCP", 00:52:30.080 "adrfam": "IPv4", 00:52:30.080 "traddr": "10.0.0.2", 00:52:30.080 "trsvcid": "4420" 00:52:30.080 } 00:52:30.080 ], 00:52:30.080 "allow_any_host": true, 00:52:30.080 "hosts": [] 00:52:30.080 }, 00:52:30.080 { 00:52:30.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:52:30.080 "subtype": "NVMe", 00:52:30.080 "listen_addresses": [ 00:52:30.080 { 00:52:30.080 "trtype": "TCP", 00:52:30.080 "adrfam": "IPv4", 00:52:30.080 "traddr": "10.0.0.2", 00:52:30.080 "trsvcid": "4420" 00:52:30.080 } 00:52:30.080 ], 00:52:30.080 "allow_any_host": true, 00:52:30.080 "hosts": [], 00:52:30.080 "serial_number": "SPDK00000000000001", 00:52:30.080 "model_number": "SPDK bdev Controller", 00:52:30.080 "max_namespaces": 32, 00:52:30.080 "min_cntlid": 1, 00:52:30.080 "max_cntlid": 65519, 00:52:30.080 "namespaces": [ 00:52:30.080 { 00:52:30.080 "nsid": 1, 00:52:30.080 "bdev_name": "Malloc0", 00:52:30.080 "name": "Malloc0", 00:52:30.080 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:52:30.080 "eui64": "ABCDEF0123456789", 00:52:30.080 "uuid": "4a6844c5-5ab7-4473-af08-01fd9a427e1d" 00:52:30.080 } 00:52:30.080 ] 00:52:30.080 } 00:52:30.080 ] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.080 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:52:30.080 [2024-06-11 03:49:11.310712] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:52:30.080 [2024-06-11 03:49:11.310758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315822 ] 00:52:30.080 EAL: No free 2048 kB hugepages reported on node 1 00:52:30.080 [2024-06-11 03:49:11.340342] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:52:30.080 [2024-06-11 03:49:11.340382] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:52:30.080 [2024-06-11 03:49:11.340386] nvme_tcp.c:2337:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:52:30.080 [2024-06-11 03:49:11.340398] nvme_tcp.c:2355:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:52:30.080 [2024-06-11 03:49:11.340406] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:52:30.080 [2024-06-11 03:49:11.340762] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:52:30.080 [2024-06-11 03:49:11.340789] nvme_tcp.c:1550:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xaba990 0 00:52:30.080 [2024-06-11 03:49:11.355019] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:52:30.080 [2024-06-11 03:49:11.355032] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:52:30.080 [2024-06-11 03:49:11.355036] nvme_tcp.c:1596:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:52:30.080 [2024-06-11 03:49:11.355039] nvme_tcp.c:1597:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:52:30.080 [2024-06-11 03:49:11.355072] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.080 [2024-06-11 03:49:11.355077] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.080 [2024-06-11 03:49:11.355081] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.080 [2024-06-11 03:49:11.355093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:52:30.080 [2024-06-11 03:49:11.355108] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.080 [2024-06-11 03:49:11.363019] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.080 [2024-06-11 03:49:11.363027] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.080 [2024-06-11 03:49:11.363031] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.080 [2024-06-11 03:49:11.363034] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.080 [2024-06-11 03:49:11.363045] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:52:30.080 [2024-06-11 03:49:11.363052] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:52:30.080 [2024-06-11 03:49:11.363056] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:52:30.080 [2024-06-11 03:49:11.363071] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.080 [2024-06-11 03:49:11.363074] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:52:30.080 [2024-06-11 03:49:11.363078] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.080 [2024-06-11 03:49:11.363084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.080 [2024-06-11 03:49:11.363096] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.080 [2024-06-11 03:49:11.363307] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.080 [2024-06-11 03:49:11.363312] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.080 [2024-06-11 03:49:11.363315] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.080 [2024-06-11 03:49:11.363319] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.080 [2024-06-11 03:49:11.363325] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:52:30.081 [2024-06-11 03:49:11.363335] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:52:30.081 [2024-06-11 03:49:11.363341] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363345] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363348] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.363353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.363363] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.363448] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.363453] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.363456] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363459] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.363464] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:52:30.081 [2024-06-11 03:49:11.363471] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:52:30.081 [2024-06-11 03:49:11.363477] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363480] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363483] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.363488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.363497] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.363566] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.363571] 
nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.363574] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363578] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.363582] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:52:30.081 [2024-06-11 03:49:11.363590] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363593] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363597] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.363602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.363611] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.363684] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.363689] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.363692] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363695] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.363699] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:52:30.081 [2024-06-11 03:49:11.363704] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:52:30.081 [2024-06-11 03:49:11.363710] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:52:30.081 [2024-06-11 03:49:11.363817] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:52:30.081 [2024-06-11 03:49:11.363821] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:52:30.081 [2024-06-11 03:49:11.363828] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363831] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363834] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.363840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.363849] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.363923] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.363928] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.363931] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 
[2024-06-11 03:49:11.363935] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.363939] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:52:30.081 [2024-06-11 03:49:11.363947] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363951] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.363954] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.363960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.363969] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.364058] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.364064] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.364067] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364070] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.364074] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:52:30.081 [2024-06-11 03:49:11.364078] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:52:30.081 [2024-06-11 03:49:11.364085] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:52:30.081 [2024-06-11 03:49:11.364092] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:52:30.081 [2024-06-11 03:49:11.364100] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364103] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.364118] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.364219] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.081 [2024-06-11 03:49:11.364225] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.081 [2024-06-11 03:49:11.364228] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364233] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaba990): datao=0, datal=4096, cccid=0 00:52:30.081 [2024-06-11 03:49:11.364237] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb14240) on tqpair(0xaba990): expected_datao=0, payload_size=4096 00:52:30.081 [2024-06-11 03:49:11.364241] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 
[2024-06-11 03:49:11.364247] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364250] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364270] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.364275] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.364278] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364281] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.364288] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:52:30.081 [2024-06-11 03:49:11.364292] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:52:30.081 [2024-06-11 03:49:11.364298] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:52:30.081 [2024-06-11 03:49:11.364303] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:52:30.081 [2024-06-11 03:49:11.364307] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:52:30.081 [2024-06-11 03:49:11.364311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:52:30.081 [2024-06-11 03:49:11.364319] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:52:30.081 [2024-06-11 03:49:11.364324] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364328] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364331] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:52:30.081 [2024-06-11 03:49:11.364347] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.364438] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.364444] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.364447] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364450] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14240) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.364456] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364459] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364462] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.081 [2024-06-11 03:49:11.364473] nvme_tcp.c: 
771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364476] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364479] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.081 [2024-06-11 03:49:11.364491] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364494] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364498] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.081 [2024-06-11 03:49:11.364507] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364510] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364513] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.081 [2024-06-11 03:49:11.364522] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:52:30.081 [2024-06-11 03:49:11.364532] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:52:30.081 [2024-06-11 03:49:11.364537] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364541] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.364556] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14240, cid 0, qid 0 00:52:30.081 [2024-06-11 03:49:11.364561] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb143c0, cid 1, qid 0 00:52:30.081 [2024-06-11 03:49:11.364565] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14540, cid 2, qid 0 00:52:30.081 [2024-06-11 03:49:11.364568] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.081 [2024-06-11 03:49:11.364572] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14840, cid 4, qid 0 00:52:30.081 [2024-06-11 03:49:11.364679] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.081 [2024-06-11 03:49:11.364685] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.081 [2024-06-11 03:49:11.364688] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364691] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14840) on tqpair=0xaba990 00:52:30.081 [2024-06-11 03:49:11.364696] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:52:30.081 [2024-06-11 03:49:11.364700] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:52:30.081 [2024-06-11 03:49:11.364708] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.081 [2024-06-11 03:49:11.364712] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaba990) 00:52:30.081 [2024-06-11 03:49:11.364717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.081 [2024-06-11 03:49:11.364726] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14840, cid 4, qid 0 00:52:30.081 [2024-06-11 03:49:11.364808] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.081 [2024-06-11 03:49:11.364814] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.081 [2024-06-11 03:49:11.364817] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.364820] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaba990): datao=0, datal=4096, cccid=4 00:52:30.082 [2024-06-11 03:49:11.364824] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb14840) on tqpair(0xaba990): expected_datao=0, payload_size=4096 00:52:30.082 [2024-06-11 03:49:11.364830] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.364851] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.364855] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405150] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.082 [2024-06-11 03:49:11.405165] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.082 [2024-06-11 03:49:11.405169] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405172] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14840) on tqpair=0xaba990 00:52:30.082 [2024-06-11 03:49:11.405185] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:52:30.082 [2024-06-11 03:49:11.405207] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405211] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaba990) 00:52:30.082 [2024-06-11 03:49:11.405218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.082 [2024-06-11 03:49:11.405224] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405228] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405231] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaba990) 00:52:30.082 [2024-06-11 03:49:11.405236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.082 [2024-06-11 03:49:11.405253] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xb14840, cid 4, qid 0 00:52:30.082 [2024-06-11 03:49:11.405258] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb149c0, cid 5, qid 0 00:52:30.082 [2024-06-11 03:49:11.405364] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.082 [2024-06-11 03:49:11.405370] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.082 [2024-06-11 03:49:11.405373] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405377] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaba990): datao=0, datal=1024, cccid=4 00:52:30.082 [2024-06-11 03:49:11.405381] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb14840) on tqpair(0xaba990): expected_datao=0, payload_size=1024 00:52:30.082 [2024-06-11 03:49:11.405384] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405390] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405393] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405398] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.082 [2024-06-11 03:49:11.405402] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.082 [2024-06-11 03:49:11.405406] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.405409] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb149c0) on tqpair=0xaba990 00:52:30.082 [2024-06-11 03:49:11.449020] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.082 [2024-06-11 03:49:11.449034] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.082 [2024-06-11 03:49:11.449037] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449041] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14840) on tqpair=0xaba990 00:52:30.082 [2024-06-11 03:49:11.449053] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449056] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaba990) 00:52:30.082 [2024-06-11 03:49:11.449063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.082 [2024-06-11 03:49:11.449083] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14840, cid 4, qid 0 00:52:30.082 [2024-06-11 03:49:11.449246] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.082 [2024-06-11 03:49:11.449252] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.082 [2024-06-11 03:49:11.449255] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449258] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaba990): datao=0, datal=3072, cccid=4 00:52:30.082 [2024-06-11 03:49:11.449262] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb14840) on tqpair(0xaba990): expected_datao=0, payload_size=3072 00:52:30.082 [2024-06-11 03:49:11.449266] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449272] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449275] 
nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449321] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.082 [2024-06-11 03:49:11.449327] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.082 [2024-06-11 03:49:11.449330] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449333] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14840) on tqpair=0xaba990 00:52:30.082 [2024-06-11 03:49:11.449341] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449344] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaba990) 00:52:30.082 [2024-06-11 03:49:11.449350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.082 [2024-06-11 03:49:11.449362] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb14840, cid 4, qid 0 00:52:30.082 [2024-06-11 03:49:11.449444] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.082 [2024-06-11 03:49:11.449450] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.082 [2024-06-11 03:49:11.449453] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449456] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaba990): datao=0, datal=8, cccid=4 00:52:30.082 [2024-06-11 03:49:11.449460] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb14840) on tqpair(0xaba990): expected_datao=0, payload_size=8 00:52:30.082 [2024-06-11 03:49:11.449464] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449469] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.082 [2024-06-11 03:49:11.449472] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.345 [2024-06-11 03:49:11.490192] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.345 [2024-06-11 03:49:11.490214] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.345 [2024-06-11 03:49:11.490218] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.345 [2024-06-11 03:49:11.490221] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb14840) on tqpair=0xaba990
00:52:30.345 =====================================================
00:52:30.345 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:52:30.345 =====================================================
00:52:30.345 Controller Capabilities/Features
00:52:30.345 ================================
00:52:30.345 Vendor ID: 0000
00:52:30.345 Subsystem Vendor ID: 0000
00:52:30.345 Serial Number: ....................
00:52:30.345 Model Number: ........................................
00:52:30.345 Firmware Version: 24.09
00:52:30.345 Recommended Arb Burst: 0
00:52:30.345 IEEE OUI Identifier: 00 00 00
00:52:30.345 Multi-path I/O
00:52:30.345 May have multiple subsystem ports: No
00:52:30.345 May have multiple controllers: No
00:52:30.345 Associated with SR-IOV VF: No
00:52:30.345 Max Data Transfer Size: 131072
00:52:30.345 Max Number of Namespaces: 0
00:52:30.345 Max Number of I/O Queues: 1024
00:52:30.345 NVMe Specification Version (VS): 1.3
00:52:30.345 NVMe Specification Version (Identify): 1.3
00:52:30.345 Maximum Queue Entries: 128
00:52:30.345 Contiguous Queues Required: Yes
00:52:30.345 Arbitration Mechanisms Supported
00:52:30.345 Weighted Round Robin: Not Supported
00:52:30.345 Vendor Specific: Not Supported
00:52:30.345 Reset Timeout: 15000 ms
00:52:30.345 Doorbell Stride: 4 bytes
00:52:30.345 NVM Subsystem Reset: Not Supported
00:52:30.345 Command Sets Supported
00:52:30.345 NVM Command Set: Supported
00:52:30.345 Boot Partition: Not Supported
00:52:30.345 Memory Page Size Minimum: 4096 bytes
00:52:30.345 Memory Page Size Maximum: 4096 bytes
00:52:30.345 Persistent Memory Region: Not Supported
00:52:30.345 Optional Asynchronous Events Supported
00:52:30.345 Namespace Attribute Notices: Not Supported
00:52:30.345 Firmware Activation Notices: Not Supported
00:52:30.345 ANA Change Notices: Not Supported
00:52:30.345 PLE Aggregate Log Change Notices: Not Supported
00:52:30.345 LBA Status Info Alert Notices: Not Supported
00:52:30.345 EGE Aggregate Log Change Notices: Not Supported
00:52:30.345 Normal NVM Subsystem Shutdown event: Not Supported
00:52:30.345 Zone Descriptor Change Notices: Not Supported
00:52:30.345 Discovery Log Change Notices: Supported
00:52:30.345 Controller Attributes
00:52:30.345 128-bit Host Identifier: Not Supported
00:52:30.345 Non-Operational Permissive Mode: Not Supported
00:52:30.345 NVM Sets: Not Supported
00:52:30.345 Read Recovery Levels: Not Supported
00:52:30.345 Endurance Groups: Not Supported
00:52:30.345 Predictable Latency Mode: Not Supported
00:52:30.345 Traffic Based Keep ALive: Not Supported
00:52:30.346 Namespace Granularity: Not Supported
00:52:30.346 SQ Associations: Not Supported
00:52:30.346 UUID List: Not Supported
00:52:30.346 Multi-Domain Subsystem: Not Supported
00:52:30.346 Fixed Capacity Management: Not Supported
00:52:30.346 Variable Capacity Management: Not Supported
00:52:30.346 Delete Endurance Group: Not Supported
00:52:30.346 Delete NVM Set: Not Supported
00:52:30.346 Extended LBA Formats Supported: Not Supported
00:52:30.346 Flexible Data Placement Supported: Not Supported
00:52:30.346
00:52:30.346 Controller Memory Buffer Support
00:52:30.346 ================================
00:52:30.346 Supported: No
00:52:30.346
00:52:30.346 Persistent Memory Region Support
00:52:30.346 ================================
00:52:30.346 Supported: No
00:52:30.346
00:52:30.346 Admin Command Set Attributes
00:52:30.346 ============================
00:52:30.346 Security Send/Receive: Not Supported
00:52:30.346 Format NVM: Not Supported
00:52:30.346 Firmware Activate/Download: Not Supported
00:52:30.346 Namespace Management: Not Supported
00:52:30.346 Device Self-Test: Not Supported
00:52:30.346 Directives: Not Supported
00:52:30.346 NVMe-MI: Not Supported
00:52:30.346 Virtualization Management: Not Supported
00:52:30.346 Doorbell Buffer Config: Not Supported
00:52:30.346 Get LBA Status Capability: Not Supported
00:52:30.346 Command & Feature Lockdown Capability: Not Supported
00:52:30.346 Abort Command Limit: 1
00:52:30.346 Async Event Request Limit: 4
00:52:30.346 Number of Firmware Slots: N/A
00:52:30.346 Firmware Slot 1 Read-Only: N/A
00:52:30.346 Firmware Activation Without Reset: N/A
00:52:30.346 Multiple Update Detection Support: N/A
00:52:30.346 Firmware Update Granularity: No Information Provided
00:52:30.346 Per-Namespace SMART Log: No
00:52:30.346 Asymmetric Namespace Access Log Page: Not Supported
00:52:30.346 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:52:30.346 Command Effects Log Page: Not Supported
00:52:30.346 Get Log Page Extended Data: Supported
00:52:30.346 Telemetry Log Pages: Not Supported
00:52:30.346 Persistent Event Log Pages: Not Supported
00:52:30.346 Supported Log Pages Log Page: May Support
00:52:30.346 Commands Supported & Effects Log Page: Not Supported
00:52:30.346 Feature Identifiers & Effects Log Page:May Support
00:52:30.346 NVMe-MI Commands & Effects Log Page: May Support
00:52:30.346 Data Area 4 for Telemetry Log: Not Supported
00:52:30.346 Error Log Page Entries Supported: 128
00:52:30.346 Keep Alive: Not Supported
00:52:30.346
00:52:30.346 NVM Command Set Attributes
00:52:30.346 ==========================
00:52:30.346 Submission Queue Entry Size
00:52:30.346 Max: 1
00:52:30.346 Min: 1
00:52:30.346 Completion Queue Entry Size
00:52:30.346 Max: 1
00:52:30.346 Min: 1
00:52:30.346 Number of Namespaces: 0
00:52:30.346 Compare Command: Not Supported
00:52:30.346 Write Uncorrectable Command: Not Supported
00:52:30.346 Dataset Management Command: Not Supported
00:52:30.346 Write Zeroes Command: Not Supported
00:52:30.346 Set Features Save Field: Not Supported
00:52:30.346 Reservations: Not Supported
00:52:30.346 Timestamp: Not Supported
00:52:30.346 Copy: Not Supported
00:52:30.346 Volatile Write Cache: Not Present
00:52:30.346 Atomic Write Unit (Normal): 1
00:52:30.346 Atomic Write Unit (PFail): 1
00:52:30.346 Atomic Compare & Write Unit: 1
00:52:30.346 Fused Compare & Write: Supported
00:52:30.346 Scatter-Gather List
00:52:30.346 SGL Command Set: Supported
00:52:30.346 SGL Keyed: Supported
00:52:30.346 SGL Bit Bucket Descriptor: Not Supported
00:52:30.346 SGL Metadata Pointer: Not Supported
00:52:30.346 Oversized SGL: Not Supported
00:52:30.346 SGL Metadata Address: Not Supported
00:52:30.346 SGL Offset: Supported
00:52:30.346 Transport SGL Data Block: Not Supported
00:52:30.346 Replay Protected Memory Block: Not Supported
00:52:30.346
00:52:30.346 Firmware Slot Information
00:52:30.346 =========================
00:52:30.346 Active slot: 0
00:52:30.346
00:52:30.346
00:52:30.346 Error Log
00:52:30.346 =========
00:52:30.346
00:52:30.346 Active Namespaces
00:52:30.346 =================
00:52:30.346 Discovery Log Page
00:52:30.346 ==================
00:52:30.346 Generation Counter: 2
00:52:30.346 Number of Records: 2
00:52:30.346 Record Format: 0
00:52:30.346
00:52:30.346 Discovery Log Entry 0
00:52:30.346 ----------------------
00:52:30.346 Transport Type: 3 (TCP)
00:52:30.346 Address Family: 1 (IPv4)
00:52:30.346 Subsystem Type: 3 (Current Discovery Subsystem)
00:52:30.346 Entry Flags:
00:52:30.346 Duplicate Returned Information: 1
00:52:30.346 Explicit Persistent Connection Support for Discovery: 1
00:52:30.346 Transport Requirements:
00:52:30.346 Secure Channel: Not Required
00:52:30.346 Port ID: 0 (0x0000)
00:52:30.346 Controller ID: 65535 (0xffff)
00:52:30.346 Admin Max SQ Size: 128
00:52:30.346 Transport Service Identifier: 4420
00:52:30.346 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:52:30.346 Transport Address: 10.0.0.2
00:52:30.346 Discovery Log Entry 1
00:52:30.346 ----------------------
00:52:30.346 Transport Type: 3 (TCP)
00:52:30.346 Address Family: 1 (IPv4)
00:52:30.346 Subsystem Type: 2 (NVM Subsystem)
00:52:30.346 Entry Flags:
00:52:30.346 Duplicate Returned Information: 0
00:52:30.346 Explicit Persistent Connection Support for Discovery: 0
00:52:30.346 Transport Requirements:
00:52:30.346 Secure Channel: Not Required
00:52:30.346 Port ID: 0 (0x0000)
00:52:30.346 Controller ID: 65535 (0xffff)
00:52:30.346 Admin Max SQ Size: 128
00:52:30.346 Transport Service Identifier: 4420
00:52:30.346 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:52:30.346 Transport Address: 10.0.0.2 [2024-06-11 03:49:11.490299] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:52:30.346 [2024-06-11 03:49:11.490311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.346 [2024-06-11 03:49:11.490317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.346 [2024-06-11 03:49:11.490323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.346 [2024-06-11 03:49:11.490328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.346 [2024-06-11 03:49:11.490338] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490341] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490346] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.346 [2024-06-11 03:49:11.490353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.346 [2024-06-11 03:49:11.490366] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.346 [2024-06-11 03:49:11.490472] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.346 [2024-06-11 03:49:11.490477] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.346 [2024-06-11 03:49:11.490480] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490484] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.346 [2024-06-11 03:49:11.490490] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490493] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490496] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.346 [2024-06-11 03:49:11.490502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.346 [2024-06-11 03:49:11.490514] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.346 [2024-06-11 03:49:11.490613] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.346 [2024-06-11 03:49:11.490618] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.346 [2024-06-11 03:49:11.490621] 
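The GET LOG PAGE (02) commands earlier in the trace, with cdw10 values 0x00ff0070, 0x02ff0070 and 0x00010070, are all reads of log page 0x70, the NVMe-oF discovery log printed above: the low byte of cdw10 selects the page, the upper half carries the dword count, so the host pulls the first 1024 bytes, then the remaining records, then an 8-byte re-read of the generation counter to confirm the page did not change underneath it. A minimal host-side sketch of the same fetch, assuming SPDK's public spdk_nvme_ctrlr_cmd_get_log_page() API and the spdk_nvmf_discovery_log_page layout from spdk/nvmf_spec.h; the fixed buffer size and busy-wait completion handling are illustrative simplifications, not the test's actual code.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static volatile bool g_log_done;

/* Admin completion callback for the GET LOG PAGE capsule seen in the trace. */
static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

/* Read log page 0x70 (SPDK_NVME_LOG_DISCOVERY) and print each record. */
static int
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	static uint8_t buf[16 * 1024];	/* illustrative fixed size */
	struct spdk_nvmf_discovery_log_page *log = (void *)buf;
	uint64_t i;
	int rc;

	g_log_done = false;
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					      0 /* nsid, as in the trace */,
					      buf, sizeof(buf), 0 /* offset */,
					      get_log_done, NULL);
	if (rc != 0) {
		return rc;
	}
	/* Poll the admin queue until the completion arrives. */
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("Generation Counter: %ju, Records: %ju\n",
	       (uintmax_t)log->genctr, (uintmax_t)log->numrec);
	for (i = 0; i < log->numrec; i++) {
		struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		printf("trtype %u, subtype %u, traddr %.256s, trsvcid %.32s, subnqn %.256s\n",
		       e->trtype, e->subtype, e->traddr, e->trsvcid, e->subnqn);
	}
	return 0;
}

A production reader would re-issue the read if genctr changed between the header and the entries, which is exactly why the trace shows the page fetched in several steps.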
nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490624] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.346 [2024-06-11 03:49:11.490628] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:52:30.346 [2024-06-11 03:49:11.490632] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:52:30.346 [2024-06-11 03:49:11.490640] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490643] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.346 [2024-06-11 03:49:11.490647] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.346 [2024-06-11 03:49:11.490652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.346 [2024-06-11 03:49:11.490661] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.490733] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.490738] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.490741] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490744] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.490753] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490756] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490759] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.490765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.490773] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.490843] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.490848] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.490851] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490854] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.490864] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490868] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490871] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.490876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.490885] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.490955] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 
03:49:11.490960] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.490963] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490967] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.490974] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490978] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.490981] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.490986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.490995] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491075] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491081] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491084] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491087] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491095] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491108] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491111] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491128] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491201] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491207] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491210] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491213] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491221] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491224] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491227] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491242] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491314] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491320] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491323] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 
[2024-06-11 03:49:11.491326] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491334] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491339] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491342] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491357] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491427] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491433] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491436] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491439] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491447] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491450] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491453] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491468] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491541] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491546] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491549] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491552] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491560] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491563] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491566] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491580] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491653] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491658] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491661] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491664] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491672] nvme_tcp.c: 
771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491675] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491678] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491693] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491762] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491768] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491770] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491774] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491781] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491785] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491789] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491804] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491873] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491879] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491882] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491885] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.491893] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491896] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491899] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.491904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.347 [2024-06-11 03:49:11.491913] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.347 [2024-06-11 03:49:11.491988] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.347 [2024-06-11 03:49:11.491993] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.347 [2024-06-11 03:49:11.491996] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.491999] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.347 [2024-06-11 03:49:11.492007] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.492016] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.347 [2024-06-11 03:49:11.492019] 
nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.347 [2024-06-11 03:49:11.492025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492034] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492105] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492111] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492114] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492117] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492125] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492128] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492131] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492146] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492216] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492221] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492224] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492227] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492235] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492238] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492241] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492260] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492330] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492336] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492339] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492342] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492350] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492353] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492356] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492370] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492440] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492445] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492448] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492451] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492459] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492463] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492466] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492480] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492549] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492555] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492558] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492561] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492569] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492572] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492575] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492589] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492660] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492665] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492668] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492671] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492679] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492682] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492685] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492701] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 
00:52:30.348 [2024-06-11 03:49:11.492776] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492781] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492784] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492787] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492795] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492798] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492801] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492816] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492886] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.492891] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.492894] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492897] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.492905] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492909] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.492911] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.492917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.492926] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.492995] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.493001] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.348 [2024-06-11 03:49:11.493003] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.493007] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.497024] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.497029] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.497032] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaba990) 00:52:30.348 [2024-06-11 03:49:11.497038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.348 [2024-06-11 03:49:11.497049] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb146c0, cid 3, qid 0 00:52:30.348 [2024-06-11 03:49:11.497212] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.348 [2024-06-11 03:49:11.497218] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:52:30.348 [2024-06-11 03:49:11.497221] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.348 [2024-06-11 03:49:11.497224] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb146c0) on tqpair=0xaba990 00:52:30.348 [2024-06-11 03:49:11.497230] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:52:30.348 00:52:30.348 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:52:30.348 [2024-06-11 03:49:11.532450] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:52:30.348 [2024-06-11 03:49:11.532494] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2315828 ] 00:52:30.348 EAL: No free 2048 kB hugepages reported on node 1 00:52:30.348 [2024-06-11 03:49:11.562020] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:52:30.348 [2024-06-11 03:49:11.562056] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:52:30.348 [2024-06-11 03:49:11.562061] nvme_tcp.c:2337:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:52:30.348 [2024-06-11 03:49:11.562070] nvme_tcp.c:2355:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:52:30.348 [2024-06-11 03:49:11.562077] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:52:30.348 [2024-06-11 03:49:11.562418] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:52:30.348 [2024-06-11 03:49:11.562439] nvme_tcp.c:1550:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x801990 0 00:52:30.348 [2024-06-11 03:49:11.569021] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:52:30.348 [2024-06-11 03:49:11.569033] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:52:30.348 [2024-06-11 03:49:11.569036] nvme_tcp.c:1596:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:52:30.349 [2024-06-11 03:49:11.569039] nvme_tcp.c:1597:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:52:30.349 [2024-06-11 03:49:11.569067] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.569071] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.569075] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.569084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:52:30.349 [2024-06-11 03:49:11.569099] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.577019] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.577027] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.577030] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 
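The host/identify.sh step above hands spdk_nvme_identify a transport ID string via -r; everything that follows in the trace (FABRIC CONNECT, the VS and CAP property reads, CC.EN = 1, the CSTS.RDY wait, IDENTIFY) is the driver's connect state machine for nqn.2016-06.io.spdk:cnode1. A minimal sketch of the same flow against SPDK's public API, reusing the trid string from the log; the environment setup and error reporting are trimmed for illustration, and this is not the identify tool's actual source.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	/* Minimal env/EAL bring-up, mirroring the DPDK initialization above. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* The same transport ID string the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/*
	 * Synchronous fabrics connect: runs the init sequence the trace
	 * logs record by record (connect adminq, read vs, read cap,
	 * enable controller, wait for CSTS.RDY = 1, identify, ...).
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	printf("connected to %s, max xfer size %u bytes\n",
	       trid.subnqn, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() is the single-controller, blocking variant; a non-blocking application would use the async connect path instead and poll it to completion.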
[2024-06-11 03:49:11.577033] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.577041] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:52:30.349 [2024-06-11 03:49:11.577047] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:52:30.349 [2024-06-11 03:49:11.577051] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:52:30.349 [2024-06-11 03:49:11.577062] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577066] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577069] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.577076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.349 [2024-06-11 03:49:11.577088] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.577258] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.577264] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.577269] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577273] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.577279] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:52:30.349 [2024-06-11 03:49:11.577285] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:52:30.349 [2024-06-11 03:49:11.577291] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577294] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577297] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.577303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.349 [2024-06-11 03:49:11.577312] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.577421] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.577426] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.577429] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577432] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.577436] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:52:30.349 [2024-06-11 03:49:11.577443] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:52:30.349 [2024-06-11 03:49:11.577448] nvme_tcp.c: 
771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577451] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577454] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.577460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.349 [2024-06-11 03:49:11.577469] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.577570] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.577576] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.577578] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577582] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.577586] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:52:30.349 [2024-06-11 03:49:11.577593] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577597] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577600] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.577605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.349 [2024-06-11 03:49:11.577614] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.577722] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.577727] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.577730] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577733] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.577736] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:52:30.349 [2024-06-11 03:49:11.577742] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:52:30.349 [2024-06-11 03:49:11.577748] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:52:30.349 [2024-06-11 03:49:11.577853] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:52:30.349 [2024-06-11 03:49:11.577856] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:52:30.349 [2024-06-11 03:49:11.577862] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577865] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577868] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.577874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.349 [2024-06-11 03:49:11.577883] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.577957] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.577963] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.577966] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577969] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.577972] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:52:30.349 [2024-06-11 03:49:11.577980] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577983] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.577986] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.577992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.349 [2024-06-11 03:49:11.578001] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.578108] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.578114] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.578117] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.578120] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.578124] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:52:30.349 [2024-06-11 03:49:11.578128] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:52:30.349 [2024-06-11 03:49:11.578134] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:52:30.349 [2024-06-11 03:49:11.578141] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:52:30.349 [2024-06-11 03:49:11.578147] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.578151] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.349 [2024-06-11 03:49:11.578156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.349 [2024-06-11 03:49:11.578166] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.349 [2024-06-11 03:49:11.578277] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.349 [2024-06-11 
03:49:11.578283] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.349 [2024-06-11 03:49:11.578286] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.578289] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=4096, cccid=0 00:52:30.349 [2024-06-11 03:49:11.578293] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85b240) on tqpair(0x801990): expected_datao=0, payload_size=4096 00:52:30.349 [2024-06-11 03:49:11.578296] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.578302] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.578306] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.578361] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.349 [2024-06-11 03:49:11.578366] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.349 [2024-06-11 03:49:11.578369] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.349 [2024-06-11 03:49:11.578372] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.349 [2024-06-11 03:49:11.578377] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:52:30.350 [2024-06-11 03:49:11.578381] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:52:30.350 [2024-06-11 03:49:11.578387] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:52:30.350 [2024-06-11 03:49:11.578391] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:52:30.350 [2024-06-11 03:49:11.578394] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:52:30.350 [2024-06-11 03:49:11.578398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578412] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578415] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578418] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578424] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:52:30.350 [2024-06-11 03:49:11.578434] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.350 [2024-06-11 03:49:11.578513] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.350 [2024-06-11 03:49:11.578518] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.350 [2024-06-11 03:49:11.578521] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578524] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b240) on tqpair=0x801990 00:52:30.350 
[2024-06-11 03:49:11.578529] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578532] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578535] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.350 [2024-06-11 03:49:11.578545] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578548] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578551] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.350 [2024-06-11 03:49:11.578562] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578565] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578568] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.350 [2024-06-11 03:49:11.578578] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578581] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578583] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.350 [2024-06-11 03:49:11.578592] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578601] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578606] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578609] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.350 [2024-06-11 03:49:11.578625] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b240, cid 0, qid 0 00:52:30.350 [2024-06-11 03:49:11.578630] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b3c0, cid 1, qid 0 00:52:30.350 [2024-06-11 03:49:11.578634] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b540, cid 2, qid 0 00:52:30.350 [2024-06-11 03:49:11.578637] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.350 [2024-06-11 03:49:11.578641] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b840, cid 4, qid 0 
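Everything above this point — the CSTS.RDY wait, the IDENTIFY (06) commands, the four ASYNC EVENT REQUEST arms on cid 0-3, and the keep-alive probe — is the SPDK NVMe driver's internal controller-init state machine (_nvme_ctrlr_set_state), not admin commands the test issues by hand. A minimal host-side sketch of the equivalent, assuming SPDK's public headers; the address and NQN are taken from this trace, and spdk_nvme_connect() blocks until the "ready" state logged further down:

    /* Sketch: connect to the target exercised in this trace. spdk_nvme_connect()
     * internally runs the whole init sequence shown above (CSTS.RDY wait,
     * IDENTIFY, AER setup, keep-alive) before returning. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* blocks until ready */
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model: %.40s\n", (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr); /* triggers the shutdown sequence logged below */
        return 0;
    }
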
00:52:30.350 [2024-06-11 03:49:11.578768] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.350 [2024-06-11 03:49:11.578774] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.350 [2024-06-11 03:49:11.578776] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578779] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b840) on tqpair=0x801990 00:52:30.350 [2024-06-11 03:49:11.578784] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:52:30.350 [2024-06-11 03:49:11.578787] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578794] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578799] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578804] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578807] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578810] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:52:30.350 [2024-06-11 03:49:11.578824] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b840, cid 4, qid 0 00:52:30.350 [2024-06-11 03:49:11.578901] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.350 [2024-06-11 03:49:11.578906] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.350 [2024-06-11 03:49:11.578909] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578912] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b840) on tqpair=0x801990 00:52:30.350 [2024-06-11 03:49:11.578953] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578962] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:52:30.350 [2024-06-11 03:49:11.578968] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.350 [2024-06-11 03:49:11.578971] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x801990) 00:52:30.350 [2024-06-11 03:49:11.578976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.350 [2024-06-11 03:49:11.578985] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b840, cid 4, qid 0 00:52:30.350 [2024-06-11 03:49:11.579084] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.351 [2024-06-11 03:49:11.579091] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.351 [2024-06-11 03:49:11.579093] 
nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579096] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=4096, cccid=4 00:52:30.351 [2024-06-11 03:49:11.579100] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85b840) on tqpair(0x801990): expected_datao=0, payload_size=4096 00:52:30.351 [2024-06-11 03:49:11.579103] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579109] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579112] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579131] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.579136] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.579139] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579142] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b840) on tqpair=0x801990 00:52:30.351 [2024-06-11 03:49:11.579151] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:52:30.351 [2024-06-11 03:49:11.579164] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579173] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579178] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579181] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x801990) 00:52:30.351 [2024-06-11 03:49:11.579187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.351 [2024-06-11 03:49:11.579197] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b840, cid 4, qid 0 00:52:30.351 [2024-06-11 03:49:11.579281] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.351 [2024-06-11 03:49:11.579286] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.351 [2024-06-11 03:49:11.579289] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579292] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=4096, cccid=4 00:52:30.351 [2024-06-11 03:49:11.579295] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85b840) on tqpair(0x801990): expected_datao=0, payload_size=4096 00:52:30.351 [2024-06-11 03:49:11.579301] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579332] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579336] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579380] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.579385] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.579388] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 
03:49:11.579391] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b840) on tqpair=0x801990 00:52:30.351 [2024-06-11 03:49:11.579402] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579410] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579416] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579419] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x801990) 00:52:30.351 [2024-06-11 03:49:11.579424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.351 [2024-06-11 03:49:11.579434] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b840, cid 4, qid 0 00:52:30.351 [2024-06-11 03:49:11.579536] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.351 [2024-06-11 03:49:11.579541] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.351 [2024-06-11 03:49:11.579544] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579547] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=4096, cccid=4 00:52:30.351 [2024-06-11 03:49:11.579551] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85b840) on tqpair(0x801990): expected_datao=0, payload_size=4096 00:52:30.351 [2024-06-11 03:49:11.579554] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579559] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579562] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579625] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.579630] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.579633] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579636] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b840) on tqpair=0x801990 00:52:30.351 [2024-06-11 03:49:11.579642] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579649] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579656] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579661] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579665] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579669] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - 
not sending Set Features - Host ID 00:52:30.351 [2024-06-11 03:49:11.579673] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:52:30.351 [2024-06-11 03:49:11.579677] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:52:30.351 [2024-06-11 03:49:11.579692] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579695] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x801990) 00:52:30.351 [2024-06-11 03:49:11.579701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.351 [2024-06-11 03:49:11.579706] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579709] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579712] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x801990) 00:52:30.351 [2024-06-11 03:49:11.579717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:52:30.351 [2024-06-11 03:49:11.579728] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b840, cid 4, qid 0 00:52:30.351 [2024-06-11 03:49:11.579732] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b9c0, cid 5, qid 0 00:52:30.351 [2024-06-11 03:49:11.579858] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.579864] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.579867] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579870] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b840) on tqpair=0x801990 00:52:30.351 [2024-06-11 03:49:11.579875] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.579880] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.579883] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579886] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b9c0) on tqpair=0x801990 00:52:30.351 [2024-06-11 03:49:11.579893] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.579896] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x801990) 00:52:30.351 [2024-06-11 03:49:11.579902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.351 [2024-06-11 03:49:11.579910] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b9c0, cid 5, qid 0 00:52:30.351 [2024-06-11 03:49:11.580007] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.580019] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.580022] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.580025] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b9c0) on tqpair=0x801990 00:52:30.351 
[2024-06-11 03:49:11.580032] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.580035] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x801990) 00:52:30.351 [2024-06-11 03:49:11.580041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.351 [2024-06-11 03:49:11.580049] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b9c0, cid 5, qid 0 00:52:30.351 [2024-06-11 03:49:11.580160] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.580165] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.580168] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.580171] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b9c0) on tqpair=0x801990 00:52:30.351 [2024-06-11 03:49:11.580178] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.580182] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x801990) 00:52:30.351 [2024-06-11 03:49:11.580189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.351 [2024-06-11 03:49:11.580197] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b9c0, cid 5, qid 0 00:52:30.351 [2024-06-11 03:49:11.580267] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.351 [2024-06-11 03:49:11.580272] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.351 [2024-06-11 03:49:11.580275] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.580278] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b9c0) on tqpair=0x801990 00:52:30.351 [2024-06-11 03:49:11.580287] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.351 [2024-06-11 03:49:11.580291] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x801990) 00:52:30.352 [2024-06-11 03:49:11.580296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.352 [2024-06-11 03:49:11.580302] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580305] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x801990) 00:52:30.352 [2024-06-11 03:49:11.580310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.352 [2024-06-11 03:49:11.580315] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580318] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x801990) 00:52:30.352 [2024-06-11 03:49:11.580323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.352 [2024-06-11 03:49:11.580329] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.352 
[2024-06-11 03:49:11.580332] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x801990) 00:52:30.352 [2024-06-11 03:49:11.580337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.352 [2024-06-11 03:49:11.580347] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b9c0, cid 5, qid 0 00:52:30.352 [2024-06-11 03:49:11.580351] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b840, cid 4, qid 0 00:52:30.352 [2024-06-11 03:49:11.580355] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85bb40, cid 6, qid 0 00:52:30.352 [2024-06-11 03:49:11.580359] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85bcc0, cid 7, qid 0 00:52:30.352 [2024-06-11 03:49:11.580498] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.352 [2024-06-11 03:49:11.580503] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.352 [2024-06-11 03:49:11.580506] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580509] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=8192, cccid=5 00:52:30.352 [2024-06-11 03:49:11.580513] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85b9c0) on tqpair(0x801990): expected_datao=0, payload_size=8192 00:52:30.352 [2024-06-11 03:49:11.580516] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580621] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580624] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580629] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.352 [2024-06-11 03:49:11.580633] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.352 [2024-06-11 03:49:11.580636] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580639] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=512, cccid=4 00:52:30.352 [2024-06-11 03:49:11.580645] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85b840) on tqpair(0x801990): expected_datao=0, payload_size=512 00:52:30.352 [2024-06-11 03:49:11.580648] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580653] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580656] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580660] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.352 [2024-06-11 03:49:11.580665] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.352 [2024-06-11 03:49:11.580668] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580671] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=512, cccid=6 00:52:30.352 [2024-06-11 03:49:11.580674] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85bb40) on tqpair(0x801990): expected_datao=0, payload_size=512 00:52:30.352 [2024-06-11 03:49:11.580678] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 
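The pdu type values flowing through nvme_tcp_pdu_ch_handle are NVMe/TCP transport PDU types: 5 is CapsuleResp (a command completion) and 7 is C2HData (controller-to-host read data), which is why every type-7 entry carries datao/datal (payload offset and length) and a cccid tying the data back to a command capsule — here 8192 bytes for cid 5 and 512/512/4096 bytes for cids 4, 6, and 7. A small decoding sketch; the enum names are local to this sketch and taken from the NVMe/TCP spec, not from SPDK's internal headers:

    /* PDU-type values seen in the "pdu type = N" trace lines, per the
     * NVMe/TCP transport spec. Names are local to this sketch. */
    #include <stdio.h>

    enum tcp_pdu_type {
        PDU_IC_REQ       = 0x00,
        PDU_IC_RESP      = 0x01,
        PDU_H2C_TERM_REQ = 0x02,
        PDU_C2H_TERM_REQ = 0x03,
        PDU_CAPSULE_CMD  = 0x04,
        PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": command completion */
        PDU_H2C_DATA     = 0x06,
        PDU_C2H_DATA     = 0x07, /* "pdu type = 7": read data from the target */
        PDU_R2T          = 0x09,
    };

    static const char *pdu_name(int t)
    {
        switch (t) {
        case PDU_CAPSULE_CMD:  return "CapsuleCmd";
        case PDU_CAPSULE_RESP: return "CapsuleResp";
        case PDU_H2C_DATA:     return "H2CData";
        case PDU_C2H_DATA:     return "C2HData";
        case PDU_R2T:          return "R2T";
        default:               return "other";
        }
    }

    int main(void)
    {
        /* The two types that dominate this trace. */
        printf("5 = %s, 7 = %s\n", pdu_name(5), pdu_name(7));
        return 0;
    }
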
00:52:30.352 [2024-06-11 03:49:11.580682] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580685] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580690] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:52:30.352 [2024-06-11 03:49:11.580694] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:52:30.352 [2024-06-11 03:49:11.580697] nvme_tcp.c:1714:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580700] nvme_tcp.c:1715:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x801990): datao=0, datal=4096, cccid=7 00:52:30.352 [2024-06-11 03:49:11.580704] nvme_tcp.c:1726:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x85bcc0) on tqpair(0x801990): expected_datao=0, payload_size=4096 00:52:30.352 [2024-06-11 03:49:11.580707] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580712] nvme_tcp.c:1516:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580715] nvme_tcp.c:1300:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580722] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.352 [2024-06-11 03:49:11.580727] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.352 [2024-06-11 03:49:11.580729] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580732] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b9c0) on tqpair=0x801990 00:52:30.352 [2024-06-11 03:49:11.580742] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.352 [2024-06-11 03:49:11.580747] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.352 [2024-06-11 03:49:11.580750] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580753] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b840) on tqpair=0x801990 00:52:30.352 [2024-06-11 03:49:11.580760] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.352 [2024-06-11 03:49:11.580764] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.352 [2024-06-11 03:49:11.580767] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580770] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85bb40) on tqpair=0x801990 00:52:30.352 [2024-06-11 03:49:11.580777] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.352 [2024-06-11 03:49:11.580782] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.352 [2024-06-11 03:49:11.580785] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.352 [2024-06-11 03:49:11.580788] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85bcc0) on tqpair=0x801990 00:52:30.352 ===================================================== 00:52:30.352 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:52:30.352 ===================================================== 00:52:30.352 Controller Capabilities/Features 00:52:30.352 ================================ 00:52:30.352 Vendor ID: 8086 00:52:30.352 Subsystem Vendor ID: 8086 00:52:30.352 Serial Number: SPDK00000000000001 00:52:30.352 Model Number: SPDK bdev Controller 00:52:30.352 Firmware Version: 24.09 00:52:30.352 
Recommended Arb Burst: 6 00:52:30.352 IEEE OUI Identifier: e4 d2 5c 00:52:30.352 Multi-path I/O 00:52:30.352 May have multiple subsystem ports: Yes 00:52:30.352 May have multiple controllers: Yes 00:52:30.352 Associated with SR-IOV VF: No 00:52:30.352 Max Data Transfer Size: 131072 00:52:30.352 Max Number of Namespaces: 32 00:52:30.352 Max Number of I/O Queues: 127 00:52:30.352 NVMe Specification Version (VS): 1.3 00:52:30.352 NVMe Specification Version (Identify): 1.3 00:52:30.352 Maximum Queue Entries: 128 00:52:30.352 Contiguous Queues Required: Yes 00:52:30.352 Arbitration Mechanisms Supported 00:52:30.352 Weighted Round Robin: Not Supported 00:52:30.352 Vendor Specific: Not Supported 00:52:30.352 Reset Timeout: 15000 ms 00:52:30.352 Doorbell Stride: 4 bytes 00:52:30.352 NVM Subsystem Reset: Not Supported 00:52:30.352 Command Sets Supported 00:52:30.352 NVM Command Set: Supported 00:52:30.352 Boot Partition: Not Supported 00:52:30.352 Memory Page Size Minimum: 4096 bytes 00:52:30.352 Memory Page Size Maximum: 4096 bytes 00:52:30.352 Persistent Memory Region: Not Supported 00:52:30.352 Optional Asynchronous Events Supported 00:52:30.352 Namespace Attribute Notices: Supported 00:52:30.352 Firmware Activation Notices: Not Supported 00:52:30.352 ANA Change Notices: Not Supported 00:52:30.352 PLE Aggregate Log Change Notices: Not Supported 00:52:30.352 LBA Status Info Alert Notices: Not Supported 00:52:30.352 EGE Aggregate Log Change Notices: Not Supported 00:52:30.352 Normal NVM Subsystem Shutdown event: Not Supported 00:52:30.352 Zone Descriptor Change Notices: Not Supported 00:52:30.352 Discovery Log Change Notices: Not Supported 00:52:30.352 Controller Attributes 00:52:30.352 128-bit Host Identifier: Supported 00:52:30.352 Non-Operational Permissive Mode: Not Supported 00:52:30.352 NVM Sets: Not Supported 00:52:30.352 Read Recovery Levels: Not Supported 00:52:30.352 Endurance Groups: Not Supported 00:52:30.352 Predictable Latency Mode: Not Supported 00:52:30.352 Traffic Based Keep ALive: Not Supported 00:52:30.352 Namespace Granularity: Not Supported 00:52:30.352 SQ Associations: Not Supported 00:52:30.352 UUID List: Not Supported 00:52:30.352 Multi-Domain Subsystem: Not Supported 00:52:30.352 Fixed Capacity Management: Not Supported 00:52:30.352 Variable Capacity Management: Not Supported 00:52:30.352 Delete Endurance Group: Not Supported 00:52:30.352 Delete NVM Set: Not Supported 00:52:30.352 Extended LBA Formats Supported: Not Supported 00:52:30.352 Flexible Data Placement Supported: Not Supported 00:52:30.352 00:52:30.352 Controller Memory Buffer Support 00:52:30.352 ================================ 00:52:30.352 Supported: No 00:52:30.352 00:52:30.352 Persistent Memory Region Support 00:52:30.352 ================================ 00:52:30.352 Supported: No 00:52:30.352 00:52:30.352 Admin Command Set Attributes 00:52:30.352 ============================ 00:52:30.352 Security Send/Receive: Not Supported 00:52:30.352 Format NVM: Not Supported 00:52:30.352 Firmware Activate/Download: Not Supported 00:52:30.352 Namespace Management: Not Supported 00:52:30.352 Device Self-Test: Not Supported 00:52:30.353 Directives: Not Supported 00:52:30.353 NVMe-MI: Not Supported 00:52:30.353 Virtualization Management: Not Supported 00:52:30.353 Doorbell Buffer Config: Not Supported 00:52:30.353 Get LBA Status Capability: Not Supported 00:52:30.353 Command & Feature Lockdown Capability: Not Supported 00:52:30.353 Abort Command Limit: 4 00:52:30.353 Async Event Request Limit: 4 00:52:30.353 Number of 
Firmware Slots: N/A 00:52:30.353 Firmware Slot 1 Read-Only: N/A 00:52:30.353 Firmware Activation Without Reset: N/A 00:52:30.353 Multiple Update Detection Support: N/A 00:52:30.353 Firmware Update Granularity: No Information Provided 00:52:30.353 Per-Namespace SMART Log: No 00:52:30.353 Asymmetric Namespace Access Log Page: Not Supported 00:52:30.353 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:52:30.353 Command Effects Log Page: Supported 00:52:30.353 Get Log Page Extended Data: Supported 00:52:30.353 Telemetry Log Pages: Not Supported 00:52:30.353 Persistent Event Log Pages: Not Supported 00:52:30.353 Supported Log Pages Log Page: May Support 00:52:30.353 Commands Supported & Effects Log Page: Not Supported 00:52:30.353 Feature Identifiers & Effects Log Page:May Support 00:52:30.353 NVMe-MI Commands & Effects Log Page: May Support 00:52:30.353 Data Area 4 for Telemetry Log: Not Supported 00:52:30.353 Error Log Page Entries Supported: 128 00:52:30.353 Keep Alive: Supported 00:52:30.353 Keep Alive Granularity: 10000 ms 00:52:30.353 00:52:30.353 NVM Command Set Attributes 00:52:30.353 ========================== 00:52:30.353 Submission Queue Entry Size 00:52:30.353 Max: 64 00:52:30.353 Min: 64 00:52:30.353 Completion Queue Entry Size 00:52:30.353 Max: 16 00:52:30.353 Min: 16 00:52:30.353 Number of Namespaces: 32 00:52:30.353 Compare Command: Supported 00:52:30.353 Write Uncorrectable Command: Not Supported 00:52:30.353 Dataset Management Command: Supported 00:52:30.353 Write Zeroes Command: Supported 00:52:30.353 Set Features Save Field: Not Supported 00:52:30.353 Reservations: Supported 00:52:30.353 Timestamp: Not Supported 00:52:30.353 Copy: Supported 00:52:30.353 Volatile Write Cache: Present 00:52:30.353 Atomic Write Unit (Normal): 1 00:52:30.353 Atomic Write Unit (PFail): 1 00:52:30.353 Atomic Compare & Write Unit: 1 00:52:30.353 Fused Compare & Write: Supported 00:52:30.353 Scatter-Gather List 00:52:30.353 SGL Command Set: Supported 00:52:30.353 SGL Keyed: Supported 00:52:30.353 SGL Bit Bucket Descriptor: Not Supported 00:52:30.353 SGL Metadata Pointer: Not Supported 00:52:30.353 Oversized SGL: Not Supported 00:52:30.353 SGL Metadata Address: Not Supported 00:52:30.353 SGL Offset: Supported 00:52:30.353 Transport SGL Data Block: Not Supported 00:52:30.353 Replay Protected Memory Block: Not Supported 00:52:30.353 00:52:30.353 Firmware Slot Information 00:52:30.353 ========================= 00:52:30.353 Active slot: 1 00:52:30.353 Slot 1 Firmware Revision: 24.09 00:52:30.353 00:52:30.353 00:52:30.353 Commands Supported and Effects 00:52:30.353 ============================== 00:52:30.353 Admin Commands 00:52:30.353 -------------- 00:52:30.353 Get Log Page (02h): Supported 00:52:30.353 Identify (06h): Supported 00:52:30.353 Abort (08h): Supported 00:52:30.353 Set Features (09h): Supported 00:52:30.353 Get Features (0Ah): Supported 00:52:30.353 Asynchronous Event Request (0Ch): Supported 00:52:30.353 Keep Alive (18h): Supported 00:52:30.353 I/O Commands 00:52:30.353 ------------ 00:52:30.353 Flush (00h): Supported LBA-Change 00:52:30.353 Write (01h): Supported LBA-Change 00:52:30.353 Read (02h): Supported 00:52:30.353 Compare (05h): Supported 00:52:30.353 Write Zeroes (08h): Supported LBA-Change 00:52:30.353 Dataset Management (09h): Supported LBA-Change 00:52:30.353 Copy (19h): Supported LBA-Change 00:52:30.353 Unknown (79h): Supported LBA-Change 00:52:30.353 Unknown (7Ah): Supported 00:52:30.353 00:52:30.353 Error Log 00:52:30.353 ========= 00:52:30.353 00:52:30.353 Arbitration 
00:52:30.353 =========== 00:52:30.353 Arbitration Burst: 1 00:52:30.353 00:52:30.353 Power Management 00:52:30.353 ================ 00:52:30.353 Number of Power States: 1 00:52:30.353 Current Power State: Power State #0 00:52:30.353 Power State #0: 00:52:30.353 Max Power: 0.00 W 00:52:30.353 Non-Operational State: Operational 00:52:30.353 Entry Latency: Not Reported 00:52:30.353 Exit Latency: Not Reported 00:52:30.353 Relative Read Throughput: 0 00:52:30.353 Relative Read Latency: 0 00:52:30.353 Relative Write Throughput: 0 00:52:30.353 Relative Write Latency: 0 00:52:30.353 Idle Power: Not Reported 00:52:30.353 Active Power: Not Reported 00:52:30.353 Non-Operational Permissive Mode: Not Supported 00:52:30.353 00:52:30.353 Health Information 00:52:30.353 ================== 00:52:30.353 Critical Warnings: 00:52:30.353 Available Spare Space: OK 00:52:30.353 Temperature: OK 00:52:30.353 Device Reliability: OK 00:52:30.353 Read Only: No 00:52:30.353 Volatile Memory Backup: OK 00:52:30.353 Current Temperature: 0 Kelvin (-273 Celsius) 00:52:30.353 Temperature Threshold: [2024-06-11 03:49:11.580868] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.580873] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x801990) 00:52:30.353 [2024-06-11 03:49:11.580879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.353 [2024-06-11 03:49:11.580890] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85bcc0, cid 7, qid 0 00:52:30.353 [2024-06-11 03:49:11.580996] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.353 [2024-06-11 03:49:11.581002] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.353 [2024-06-11 03:49:11.581004] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.581007] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85bcc0) on tqpair=0x801990 00:52:30.353 [2024-06-11 03:49:11.585041] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:52:30.353 [2024-06-11 03:49:11.585052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.353 [2024-06-11 03:49:11.585058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.353 [2024-06-11 03:49:11.585062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.353 [2024-06-11 03:49:11.585067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:30.353 [2024-06-11 03:49:11.585074] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585077] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585080] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.353 [2024-06-11 03:49:11.585086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.353 [2024-06-11 03:49:11.585097] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x85b6c0, cid 3, qid 0 00:52:30.353 [2024-06-11 03:49:11.585294] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.353 [2024-06-11 03:49:11.585299] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.353 [2024-06-11 03:49:11.585302] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585305] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.353 [2024-06-11 03:49:11.585311] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585315] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585317] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.353 [2024-06-11 03:49:11.585323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.353 [2024-06-11 03:49:11.585335] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.353 [2024-06-11 03:49:11.585441] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.353 [2024-06-11 03:49:11.585446] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.353 [2024-06-11 03:49:11.585449] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585452] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.353 [2024-06-11 03:49:11.585456] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:52:30.353 [2024-06-11 03:49:11.585460] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:52:30.353 [2024-06-11 03:49:11.585467] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585470] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585473] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.353 [2024-06-11 03:49:11.585478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.353 [2024-06-11 03:49:11.585490] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.353 [2024-06-11 03:49:11.585592] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.353 [2024-06-11 03:49:11.585597] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.353 [2024-06-11 03:49:11.585600] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.353 [2024-06-11 03:49:11.585603] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.353 [2024-06-11 03:49:11.585611] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585614] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585617] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.585623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
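From "Prepare to destruct SSD" onward the driver tears the controller down: the queued admin commands complete as ABORTED - SQ DELETION, the shutdown notification is written via a fabrics property set, and since the controller reports RTD3E = 0 the driver falls back to the default 10000 ms shutdown timeout and polls CSTS over the fabric — each FABRIC PROPERTY GET qid:0 cid:3 that follows is one iteration of nvme_ctrlr_shutdown_poll_async, until "shutdown complete in 5 milliseconds" near the end of the dump. A conceptual sketch of that poll under those assumptions; fabric_property_get_csts() is a hypothetical stand-in for the transport hook:

    /* Conceptual sketch of the shutdown poll behind the repeated
     * "FABRIC PROPERTY GET qid:0 cid:3" entries: read CSTS until
     * SHST reports shutdown-complete or the timeout expires. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_CSTS_SHST_MASK     0x0cu /* CSTS bits 3:2 */
    #define NVME_CSTS_SHST_COMPLETE 0x08u /* SHST = 10b: shutdown complete */

    /* Hypothetical transport hook: one Fabrics Property Get of CSTS. */
    extern uint32_t fabric_property_get_csts(void *ctrlr);

    static bool shutdown_poll(void *ctrlr, uint64_t now_ms, uint64_t deadline_ms)
    {
        uint32_t csts = fabric_property_get_csts(ctrlr);

        if ((csts & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_COMPLETE) {
            return true;              /* -> "shutdown complete in N milliseconds" */
        }
        return now_ms >= deadline_ms; /* give up after the 10000 ms timeout */
    }
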
00:52:30.354 [2024-06-11 03:49:11.585632] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.585704] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.585709] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.585712] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585715] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.585723] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585726] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585729] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.585734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.585743] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.585844] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.585849] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.585852] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585855] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.585863] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585866] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.585869] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.585874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.585883] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.585996] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586001] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.586004] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586007] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586021] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586024] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586027] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586043] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.586147] 
nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586152] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.586155] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586158] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586166] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586169] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586172] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586186] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.586255] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586260] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.586263] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586266] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586274] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586277] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586280] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586294] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.586400] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586405] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.586408] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586411] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586418] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586421] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586424] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586438] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.586550] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586555] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 
03:49:11.586558] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586561] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586569] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586572] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586575] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586589] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.586701] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586706] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.586709] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586712] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586720] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586723] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586726] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586740] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.586817] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586822] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.586824] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586828] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586835] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586838] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586841] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586855] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.586952] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.586957] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.586960] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586963] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on 
tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.586971] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586974] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.586977] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.586982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.586991] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.587105] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.587110] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.587113] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.587116] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.354 [2024-06-11 03:49:11.587124] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.587127] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.354 [2024-06-11 03:49:11.587130] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.354 [2024-06-11 03:49:11.587136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.354 [2024-06-11 03:49:11.587145] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.354 [2024-06-11 03:49:11.587257] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.354 [2024-06-11 03:49:11.587264] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.354 [2024-06-11 03:49:11.587267] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587270] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.355 [2024-06-11 03:49:11.587277] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587280] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587283] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.355 [2024-06-11 03:49:11.587289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.355 [2024-06-11 03:49:11.587297] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.355 [2024-06-11 03:49:11.587369] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.355 [2024-06-11 03:49:11.587374] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.355 [2024-06-11 03:49:11.587377] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587380] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.355 [2024-06-11 03:49:11.587388] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587391] nvme_tcp.c: 
954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587394] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.355 [2024-06-11 03:49:11.587399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.355 [2024-06-11 03:49:11.587408] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.355 [2024-06-11 03:49:11.587508] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.355 [2024-06-11 03:49:11.587513] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.355 [2024-06-11 03:49:11.587516] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587519] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.355 [2024-06-11 03:49:11.587526] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587530] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.355 [2024-06-11 03:49:11.587532] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.356 [2024-06-11 03:49:11.587538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.356 [2024-06-11 03:49:11.587546] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.356 [2024-06-11 03:49:11.591017] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.356 [2024-06-11 03:49:11.591026] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.356 [2024-06-11 03:49:11.591030] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.356 [2024-06-11 03:49:11.591033] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.356 [2024-06-11 03:49:11.591042] nvme_tcp.c: 771:nvme_tcp_build_contig_request: *DEBUG*: enter 00:52:30.356 [2024-06-11 03:49:11.591046] nvme_tcp.c: 954:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:52:30.356 [2024-06-11 03:49:11.591048] nvme_tcp.c: 963:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x801990) 00:52:30.356 [2024-06-11 03:49:11.591054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:30.356 [2024-06-11 03:49:11.591065] nvme_tcp.c: 928:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x85b6c0, cid 3, qid 0 00:52:30.356 [2024-06-11 03:49:11.591262] nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:52:30.356 [2024-06-11 03:49:11.591267] nvme_tcp.c:1970:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:52:30.356 [2024-06-11 03:49:11.591272] nvme_tcp.c:1643:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:52:30.356 [2024-06-11 03:49:11.591276] nvme_tcp.c: 913:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x85b6c0) on tqpair=0x801990 00:52:30.356 [2024-06-11 03:49:11.591282] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:52:30.356 0 Kelvin (-273 Celsius) 00:52:30.356 Available Spare: 0% 00:52:30.356 Available Spare Threshold: 0% 00:52:30.356 Life Percentage Used: 0% 00:52:30.356 Data Units Read: 0 00:52:30.356 
Data Units Written: 0 00:52:30.356 Host Read Commands: 0 00:52:30.356 Host Write Commands: 0 00:52:30.356 Controller Busy Time: 0 minutes 00:52:30.356 Power Cycles: 0 00:52:30.356 Power On Hours: 0 hours 00:52:30.356 Unsafe Shutdowns: 0 00:52:30.356 Unrecoverable Media Errors: 0 00:52:30.356 Lifetime Error Log Entries: 0 00:52:30.356 Warning Temperature Time: 0 minutes 00:52:30.356 Critical Temperature Time: 0 minutes 00:52:30.356 00:52:30.356 Number of Queues 00:52:30.356 ================ 00:52:30.356 Number of I/O Submission Queues: 127 00:52:30.356 Number of I/O Completion Queues: 127 00:52:30.356 00:52:30.356 Active Namespaces 00:52:30.356 ================= 00:52:30.356 Namespace ID:1 00:52:30.356 Error Recovery Timeout: Unlimited 00:52:30.356 Command Set Identifier: NVM (00h) 00:52:30.356 Deallocate: Supported 00:52:30.356 Deallocated/Unwritten Error: Not Supported 00:52:30.356 Deallocated Read Value: Unknown 00:52:30.356 Deallocate in Write Zeroes: Not Supported 00:52:30.356 Deallocated Guard Field: 0xFFFF 00:52:30.356 Flush: Supported 00:52:30.356 Reservation: Supported 00:52:30.356 Namespace Sharing Capabilities: Multiple Controllers 00:52:30.356 Size (in LBAs): 131072 (0GiB) 00:52:30.356 Capacity (in LBAs): 131072 (0GiB) 00:52:30.356 Utilization (in LBAs): 131072 (0GiB) 00:52:30.356 NGUID: ABCDEF0123456789ABCDEF0123456789 00:52:30.356 EUI64: ABCDEF0123456789 00:52:30.356 UUID: 4a6844c5-5ab7-4473-af08-01fd9a427e1d 00:52:30.356 Thin Provisioning: Not Supported 00:52:30.356 Per-NS Atomic Units: Yes 00:52:30.356 Atomic Boundary Size (Normal): 0 00:52:30.356 Atomic Boundary Size (PFail): 0 00:52:30.356 Atomic Boundary Offset: 0 00:52:30.356 Maximum Single Source Range Length: 65535 00:52:30.356 Maximum Copy Length: 65535 00:52:30.356 Maximum Source Range Count: 1 00:52:30.356 NGUID/EUI64 Never Reused: No 00:52:30.356 Namespace Write Protected: No 00:52:30.356 Number of LBA Formats: 1 00:52:30.356 Current LBA Format: LBA Format #00 00:52:30.356 LBA Format #00: Data Size: 512 Metadata Size: 0 00:52:30.356 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:52:30.356 rmmod nvme_tcp 00:52:30.356 rmmod nvme_fabrics 00:52:30.356 rmmod nvme_keyring 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@125 -- # return 0 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2315577 ']' 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2315577 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@949 -- # '[' -z 2315577 ']' 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # kill -0 2315577 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # uname 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2315577 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2315577' 00:52:30.356 killing process with pid 2315577 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@968 -- # kill 2315577 00:52:30.356 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@973 -- # wait 2315577 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:30.614 03:49:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:33.145 03:49:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:52:33.145 00:52:33.145 real 0m9.538s 00:52:33.145 user 0m7.224s 00:52:33.145 sys 0m4.752s 00:52:33.145 03:49:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:52:33.145 03:49:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:52:33.145 ************************************ 00:52:33.145 END TEST nvmf_identify 00:52:33.145 ************************************ 00:52:33.145 03:49:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:52:33.145 03:49:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:52:33.145 03:49:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:52:33.145 03:49:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:52:33.145 ************************************ 00:52:33.145 START TEST nvmf_perf 00:52:33.145 ************************************ 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:52:33.145 * Looking for test storage... 
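For reference, the kill/wait teardown traced above comes from autotest_common.sh's killprocess helper. A condensed, hedged sketch of that flow, with the sudo special-casing and retry handling elided, is roughly:

    # Condensed sketch of killprocess as traced above (simplified; the real
    # helper also special-cases processes launched through sudo).
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                            # already gone
        local name
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap the child, keep its exit status
    }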
00:52:33.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:33.145 03:49:14 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:52:33.145 03:49:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:52:39.717 Found 0000:86:00.0 (0x8086 - 0x159b) 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:52:39.717 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:52:39.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:52:39.718 Found net devices under 0000:86:00.0: cvl_0_0 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:52:39.718 Found net devices under 0000:86:00.1: cvl_0_1 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:52:39.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:39.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:52:39.718 00:52:39.718 --- 10.0.0.2 ping statistics --- 00:52:39.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:39.718 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:52:39.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:52:39.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:52:39.718 00:52:39.718 --- 10.0.0.1 ping statistics --- 00:52:39.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:39.718 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@723 -- # xtrace_disable 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2319631 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2319631 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # '[' -z 2319631 ']' 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # local max_retries=100 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:39.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@839 -- # xtrace_disable 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:52:39.718 [2024-06-11 03:49:20.455846] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:52:39.718 [2024-06-11 03:49:20.455884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:39.718 EAL: No free 2048 kB hugepages reported on node 1 00:52:39.718 [2024-06-11 03:49:20.518541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:52:39.718 [2024-06-11 03:49:20.561367] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:39.718 [2024-06-11 03:49:20.561399] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
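Stepping back: the target/initiator split verified by the pings above is plain network-namespace plumbing. The essential commands, as they appear in the trace (the cvl_0_0/cvl_0_1 interface names are specific to this E810 test bed), are:

    # The target NIC moves into a private namespace with 10.0.0.2; the
    # initiator keeps cvl_0_1 with 10.0.0.1 in the default namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator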
00:52:39.718 [2024-06-11 03:49:20.561408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:39.718 [2024-06-11 03:49:20.561420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:39.718 [2024-06-11 03:49:20.561426] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:39.718 [2024-06-11 03:49:20.561463] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:52:39.718 [2024-06-11 03:49:20.561560] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:52:39.718 [2024-06-11 03:49:20.561665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:52:39.718 [2024-06-11 03:49:20.561666] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@863 -- # return 0 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@729 -- # xtrace_disable 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:52:39.718 03:49:20 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:52:43.006 03:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:52:43.006 03:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:52:43.006 03:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:52:43.006 03:49:23 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:52:43.006 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:52:43.006 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:52:43.006 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:52:43.006 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:52:43.006 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:52:43.006 [2024-06-11 03:49:24.253238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:43.006 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:52:43.263 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:52:43.263 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:52:43.263 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:52:43.263 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:52:43.520 03:49:24 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:52:43.779 [2024-06-11 03:49:24.974113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:52:43.779 03:49:25 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:52:44.092 03:49:25 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:52:44.092 03:49:25 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:52:44.092 03:49:25 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:52:44.092 03:49:25 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:52:45.027 Initializing NVMe Controllers 00:52:45.027 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:52:45.027 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:52:45.027 Initialization complete. Launching workers. 00:52:45.027 ======================================================== 00:52:45.027 Latency(us) 00:52:45.027 Device Information : IOPS MiB/s Average min max 00:52:45.027 PCIE (0000:5f:00.0) NSID 1 from core 0: 99601.56 389.07 320.93 33.94 7180.38 00:52:45.027 ======================================================== 00:52:45.027 Total : 99601.56 389.07 320.93 33.94 7180.38 00:52:45.027 00:52:45.027 03:49:26 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:52:45.286 EAL: No free 2048 kB hugepages reported on node 1 00:52:46.660 Initializing NVMe Controllers 00:52:46.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:52:46.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:52:46.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:52:46.660 Initialization complete. Launching workers. 
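For reference, the whole target configuration built above reduces to a short RPC sequence (rpc.py here stands for scripts/rpc.py in the SPDK tree, driven against the nvmf_tgt started earlier):

    # Build the NVMe-oF/TCP target as configured above: one subsystem, two
    # namespaces (a 64 MiB malloc ramdisk and the local NVMe bdev), listeners.
    rpc.py bdev_malloc_create 64 512
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420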
00:52:46.660 ======================================================== 00:52:46.660 Latency(us) 00:52:46.660 Device Information : IOPS MiB/s Average min max 00:52:46.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 143.00 0.56 7247.67 217.20 44686.03 00:52:46.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16464.88 6844.74 47897.27 00:52:46.660 ======================================================== 00:52:46.660 Total : 204.00 0.80 10003.80 217.20 47897.27 00:52:46.660 00:52:46.660 03:49:27 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:52:46.660 EAL: No free 2048 kB hugepages reported on node 1 00:52:48.034 Initializing NVMe Controllers 00:52:48.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:52:48.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:52:48.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:52:48.034 Initialization complete. Launching workers. 00:52:48.034 ======================================================== 00:52:48.034 Latency(us) 00:52:48.034 Device Information : IOPS MiB/s Average min max 00:52:48.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11127.87 43.47 2883.01 330.31 45231.45 00:52:48.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3857.37 15.07 8309.99 7190.91 16024.14 00:52:48.034 ======================================================== 00:52:48.034 Total : 14985.24 58.54 4279.98 330.31 45231.45 00:52:48.034 00:52:48.034 03:49:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:52:48.034 03:49:29 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:52:48.034 03:49:29 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:52:48.034 EAL: No free 2048 kB hugepages reported on node 1 00:52:50.566 Initializing NVMe Controllers 00:52:50.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:52:50.566 Controller IO queue size 128, less than required. 00:52:50.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:52:50.566 Controller IO queue size 128, less than required. 00:52:50.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:52:50.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:52:50.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:52:50.566 Initialization complete. Launching workers. 
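All of the fabrics measurements in this block share one spdk_nvme_perf invocation shape; only the queue depth (-q), IO size (-o), runtime (-t) and the occasional extra flags (-HI, -O 16384, --transport-stat) vary between runs:

    # Common shape of the perf runs above; -w randrw -M 50 requests a 50/50
    # random read/write mix against the TCP target.
    spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'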
00:52:50.566 ======================================================== 00:52:50.566 Latency(us) 00:52:50.566 Device Information : IOPS MiB/s Average min max 00:52:50.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1164.81 291.20 113440.61 64036.21 187970.06 00:52:50.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.88 149.22 222678.87 62781.69 365326.80 00:52:50.566 ======================================================== 00:52:50.566 Total : 1761.69 440.42 150451.65 62781.69 365326.80 00:52:50.566 00:52:50.566 03:49:31 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:52:50.566 EAL: No free 2048 kB hugepages reported on node 1 00:52:50.566 No valid NVMe controllers or AIO or URING devices found 00:52:50.566 Initializing NVMe Controllers 00:52:50.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:52:50.566 Controller IO queue size 128, less than required. 00:52:50.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:52:50.566 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:52:50.566 Controller IO queue size 128, less than required. 00:52:50.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:52:50.566 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:52:50.566 WARNING: Some requested NVMe devices were skipped 00:52:50.566 03:49:31 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:52:50.825 EAL: No free 2048 kB hugepages reported on node 1 00:52:53.361 Initializing NVMe Controllers 00:52:53.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:52:53.361 Controller IO queue size 128, less than required. 00:52:53.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:52:53.361 Controller IO queue size 128, less than required. 00:52:53.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:52:53.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:52:53.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:52:53.361 Initialization complete. Launching workers. 
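The skipped-namespace warnings in the -o 36964 run above are simple alignment arithmetic: perf drops any namespace whose sector size does not evenly divide the requested IO size.

    # 36964 bytes is 72 full 512-byte sectors plus 100 bytes, hence the skip:
    echo $(( 36964 % 512 ))    # -> 100, not sector-aligned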
00:52:53.361 00:52:53.361 ==================== 00:52:53.361 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:52:53.361 TCP transport: 00:52:53.361 polls: 28079 00:52:53.361 idle_polls: 9854 00:52:53.361 sock_completions: 18225 00:52:53.361 nvme_completions: 5337 00:52:53.361 submitted_requests: 7880 00:52:53.361 queued_requests: 1 00:52:53.361 00:52:53.361 ==================== 00:52:53.361 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:52:53.361 TCP transport: 00:52:53.361 polls: 33145 00:52:53.361 idle_polls: 15262 00:52:53.361 sock_completions: 17883 00:52:53.361 nvme_completions: 5365 00:52:53.361 submitted_requests: 8066 00:52:53.361 queued_requests: 1 00:52:53.361 ======================================================== 00:52:53.361 Latency(us) 00:52:53.361 Device Information : IOPS MiB/s Average min max 00:52:53.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1331.00 332.75 98346.87 51180.71 129499.36 00:52:53.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1337.99 334.50 98135.99 42072.19 151503.25 00:52:53.361 ======================================================== 00:52:53.361 Total : 2668.99 667.25 98241.15 42072.19 151503.25 00:52:53.361 00:52:53.361 03:49:34 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:52:53.361 03:49:34 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:52:53.361 03:49:34 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:52:53.361 03:49:34 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5f:00.0 ']' 00:52:53.361 03:49:34 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=16374018-fead-43ca-8d69-1273c887a248 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 16374018-fead-43ca-8d69-1273c887a248 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=16374018-fead-43ca-8d69-1273c887a248 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:52:58.632 { 00:52:58.632 "uuid": "16374018-fead-43ca-8d69-1273c887a248", 00:52:58.632 "name": "lvs_0", 00:52:58.632 "base_bdev": "Nvme0n1", 00:52:58.632 "total_data_clusters": 381173, 00:52:58.632 "free_clusters": 381173, 00:52:58.632 "block_size": 512, 00:52:58.632 "cluster_size": 4194304 00:52:58.632 } 00:52:58.632 ]' 00:52:58.632 03:49:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="16374018-fead-43ca-8d69-1273c887a248") .free_clusters' 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=381173 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="16374018-fead-43ca-8d69-1273c887a248") .cluster_size' 00:52:58.890 03:49:40 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=1524692 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 1524692 00:52:58.890 1524692 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1524692 -gt 20480 ']' 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 16374018-fead-43ca-8d69-1273c887a248 lbd_0 20480 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=895518f4-25f8-4a2c-a514-a2e35a36bd44 00:52:58.890 03:49:40 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 895518f4-25f8-4a2c-a514-a2e35a36bd44 lvs_n_0 00:53:00.265 03:49:41 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=e286be15-04a7-4c68-b1c5-d1a193131f54 00:53:00.265 03:49:41 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb e286be15-04a7-4c68-b1c5-d1a193131f54 00:53:00.265 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_uuid=e286be15-04a7-4c68-b1c5-d1a193131f54 00:53:00.265 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_info 00:53:00.265 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local fc 00:53:00.265 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local cs 00:53:00.265 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:53:00.523 { 00:53:00.523 "uuid": "16374018-fead-43ca-8d69-1273c887a248", 00:53:00.523 "name": "lvs_0", 00:53:00.523 "base_bdev": "Nvme0n1", 00:53:00.523 "total_data_clusters": 381173, 00:53:00.523 "free_clusters": 376053, 00:53:00.523 "block_size": 512, 00:53:00.523 "cluster_size": 4194304 00:53:00.523 }, 00:53:00.523 { 00:53:00.523 "uuid": "e286be15-04a7-4c68-b1c5-d1a193131f54", 00:53:00.523 "name": "lvs_n_0", 00:53:00.523 "base_bdev": "895518f4-25f8-4a2c-a514-a2e35a36bd44", 00:53:00.523 "total_data_clusters": 5114, 00:53:00.523 "free_clusters": 5114, 00:53:00.523 "block_size": 512, 00:53:00.523 "cluster_size": 4194304 00:53:00.523 } 00:53:00.523 ]' 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="e286be15-04a7-4c68-b1c5-d1a193131f54") .free_clusters' 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # fc=5114 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="e286be15-04a7-4c68-b1c5-d1a193131f54") .cluster_size' 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # cs=4194304 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1372 -- # free_mb=20456 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # echo 20456 00:53:00.523 20456 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:53:00.523 03:49:41 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e286be15-04a7-4c68-b1c5-d1a193131f54 lbd_nest_0 20456 00:53:00.782 03:49:42 
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=f9768d4f-50a6-442f-a885-39b5483951c9 00:53:00.782 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:53:01.040 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:53:01.040 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 f9768d4f-50a6-442f-a885-39b5483951c9 00:53:01.040 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:01.298 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:53:01.298 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:53:01.298 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:53:01.298 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:53:01.298 03:49:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:53:01.298 EAL: No free 2048 kB hugepages reported on node 1 00:53:13.500 Initializing NVMe Controllers 00:53:13.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:53:13.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:53:13.500 Initialization complete. Launching workers. 00:53:13.500 ======================================================== 00:53:13.500 Latency(us) 00:53:13.500 Device Information : IOPS MiB/s Average min max 00:53:13.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.18 0.02 19927.64 159.50 45652.17 00:53:13.501 ======================================================== 00:53:13.501 Total : 50.18 0.02 19927.64 159.50 45652.17 00:53:13.501 00:53:13.501 03:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:53:13.501 03:49:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:53:13.501 EAL: No free 2048 kB hugepages reported on node 1 00:53:23.522 Initializing NVMe Controllers 00:53:23.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:53:23.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:53:23.522 Initialization complete. Launching workers. 
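The six measurements in this block are driven by the qd_depth and io_size arrays declared above; the loop in perf.sh is essentially:

    # Queue-depth x IO-size matrix behind the runs above and below.
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done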
00:53:23.522 ======================================================== 00:53:23.522 Latency(us) 00:53:23.522 Device Information : IOPS MiB/s Average min max 00:53:23.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.60 9.20 13598.09 6010.94 48863.08 00:53:23.522 ======================================================== 00:53:23.522 Total : 73.60 9.20 13598.09 6010.94 48863.08 00:53:23.522 00:53:23.522 03:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:53:23.522 03:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:53:23.522 03:50:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:53:23.522 EAL: No free 2048 kB hugepages reported on node 1 00:53:33.603 Initializing NVMe Controllers 00:53:33.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:53:33.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:53:33.603 Initialization complete. Launching workers. 00:53:33.603 ======================================================== 00:53:33.603 Latency(us) 00:53:33.603 Device Information : IOPS MiB/s Average min max 00:53:33.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8797.00 4.30 3637.61 233.76 9605.52 00:53:33.603 ======================================================== 00:53:33.603 Total : 8797.00 4.30 3637.61 233.76 9605.52 00:53:33.603 00:53:33.603 03:50:13 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:53:33.603 03:50:13 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:53:33.603 EAL: No free 2048 kB hugepages reported on node 1 00:53:43.586 Initializing NVMe Controllers 00:53:43.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:53:43.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:53:43.586 Initialization complete. Launching workers. 00:53:43.586 ======================================================== 00:53:43.586 Latency(us) 00:53:43.586 Device Information : IOPS MiB/s Average min max 00:53:43.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2545.10 318.14 12584.64 929.83 28860.59 00:53:43.586 ======================================================== 00:53:43.586 Total : 2545.10 318.14 12584.64 929.83 28860.59 00:53:43.586 00:53:43.586 03:50:23 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:53:43.586 03:50:23 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:53:43.586 03:50:23 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:53:43.586 EAL: No free 2048 kB hugepages reported on node 1 00:53:53.561 Initializing NVMe Controllers 00:53:53.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:53:53.561 Controller IO queue size 128, less than required. 00:53:53.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
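As a side note, the free-space figures used when carving the logical volumes earlier fall out of get_lvs_free_mb's cluster arithmetic (free_clusters * cluster_size, converted to MiB):

    # Free MiB per lvstore, from the bdev_lvol_get_lvstores output above:
    echo $(( 381173 * 4194304 / 1048576 ))   # lvs_0   -> 1524692
    echo $(( 5114 * 4194304 / 1048576 ))     # lvs_n_0 -> 20456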
00:53:53.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:53:53.561 Initialization complete. Launching workers. 00:53:53.561 ======================================================== 00:53:53.561 Latency(us) 00:53:53.561 Device Information : IOPS MiB/s Average min max 00:53:53.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15856.00 7.74 8076.52 1362.28 47840.03 00:53:53.561 ======================================================== 00:53:53.561 Total : 15856.00 7.74 8076.52 1362.28 47840.03 00:53:53.561 00:53:53.561 03:50:34 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:53:53.561 03:50:34 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:53:53.561 EAL: No free 2048 kB hugepages reported on node 1 00:54:03.539 Initializing NVMe Controllers 00:54:03.539 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:54:03.539 Controller IO queue size 128, less than required. 00:54:03.539 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:54:03.539 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:54:03.539 Initialization complete. Launching workers. 00:54:03.539 ======================================================== 00:54:03.539 Latency(us) 00:54:03.539 Device Information : IOPS MiB/s Average min max 00:54:03.539 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.90 150.61 106643.60 20096.93 218922.27 00:54:03.539 ======================================================== 00:54:03.539 Total : 1204.90 150.61 106643.60 20096.93 218922.27 00:54:03.539 00:54:03.539 03:50:44 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:03.539 03:50:44 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9768d4f-50a6-442f-a885-39b5483951c9 00:54:04.106 03:50:45 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:54:04.389 03:50:45 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 895518f4-25f8-4a2c-a514-a2e35a36bd44 00:54:04.649 03:50:45 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:04.649 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:04.649 rmmod nvme_tcp 00:54:04.649 rmmod nvme_fabrics 00:54:04.649 rmmod nvme_keyring 00:54:04.908 03:50:46 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2319631 ']' 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2319631 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@949 -- # '[' -z 2319631 ']' 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # kill -0 2319631 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # uname 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2319631 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2319631' 00:54:04.908 killing process with pid 2319631 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@968 -- # kill 2319631 00:54:04.908 03:50:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@973 -- # wait 2319631 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:07.438 03:50:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:09.344 03:50:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:54:09.344 00:54:09.344 real 1m36.253s 00:54:09.344 user 5m44.701s 00:54:09.344 sys 0m15.623s 00:54:09.344 03:50:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:54:09.344 03:50:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:54:09.344 ************************************ 00:54:09.344 END TEST nvmf_perf 00:54:09.344 ************************************ 00:54:09.344 03:50:50 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:54:09.344 03:50:50 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:54:09.344 03:50:50 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:54:09.344 03:50:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:09.344 ************************************ 00:54:09.344 START TEST nvmf_fio_host 00:54:09.344 ************************************ 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:54:09.344 * Looking for test storage... 
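Before the fio_host suite gets going, note how the perf teardown above unwound its stack: the subsystem goes first, then the lvols innermost-first, so each lvstore is empty before it is destroyed:

    # Teardown order from the trace above (UUIDs are the lvols created earlier).
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rpc.py bdev_lvol_delete f9768d4f-50a6-442f-a885-39b5483951c9   # lbd_nest_0
    rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
    rpc.py bdev_lvol_delete 895518f4-25f8-4a2c-a514-a2e35a36bd44   # lbd_0
    rpc.py bdev_lvol_delete_lvstore -l lvs_0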
00:54:09.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:09.344 03:50:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:54:09.345 03:50:50 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:54:14.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:54:14.615 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:54:14.616 Found 0000:86:00.1 (0x8086 - 0x159b) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:54:14.616 Found net devices under 0000:86:00.0: cvl_0_0 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:54:14.616 Found net devices under 0000:86:00.1: cvl_0_1 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:54:14.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:14.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:54:14.616 00:54:14.616 --- 10.0.0.2 ping statistics --- 00:54:14.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:14.616 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:54:14.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:14.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:54:14.616 00:54:14.616 --- 10.0.0.1 ping statistics --- 00:54:14.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:14.616 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:14.616 03:50:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:14.875 03:50:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:54:14.875 03:50:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:54:14.875 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:54:14.875 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:14.875 03:50:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2337441 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2337441 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # '[' -z 2337441 ']' 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:14.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:54:14.876 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:14.876 [2024-06-11 03:50:56.083007] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:54:14.876 [2024-06-11 03:50:56.083053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:14.876 EAL: No free 2048 kB hugepages reported on node 1 00:54:14.876 [2024-06-11 03:50:56.149835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:54:14.876 [2024-06-11 03:50:56.191919] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:54:14.876 [2024-06-11 03:50:56.191961] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:14.876 [2024-06-11 03:50:56.191971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:14.876 [2024-06-11 03:50:56.191978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:14.876 [2024-06-11 03:50:56.191984] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:14.876 [2024-06-11 03:50:56.192042] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:54:14.876 [2024-06-11 03:50:56.192060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:54:14.876 [2024-06-11 03:50:56.192133] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:54:14.876 [2024-06-11 03:50:56.192136] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:54:15.848 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:54:15.848 03:50:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@863 -- # return 0 00:54:15.848 03:50:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:54:15.848 [2024-06-11 03:50:57.023366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:15.848 03:50:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:54:15.848 03:50:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:54:15.848 03:50:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:15.848 03:50:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:54:16.127 Malloc1 00:54:16.127 03:50:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:54:16.127 03:50:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:54:16.385 03:50:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:16.643 [2024-06-11 03:50:57.805627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:16.643 03:50:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:54:16.643 03:50:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:54:16.643 03:50:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:16.644 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:54:16.901 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:16.901 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:16.901 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:54:16.901 03:50:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:17.158 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:54:17.158 fio-3.35 00:54:17.158 Starting 1 thread 00:54:17.158 EAL: No free 2048 kB hugepages reported on node 1 00:54:19.692 00:54:19.692 test: (groupid=0, jobs=1): err= 0: pid=2337836: Tue Jun 11 03:51:00 2024 00:54:19.692 read: IOPS=12.2k, BW=47.6MiB/s (50.0MB/s)(95.5MiB/2005msec) 00:54:19.692 slat (nsec): min=1502, max=237836, avg=1733.45, stdev=2193.82 00:54:19.692 clat (usec): min=3005, max=9981, avg=5794.19, stdev=479.34 00:54:19.692 lat (usec): min=3036, max=9982, avg=5795.92, stdev=479.35 00:54:19.692 clat percentiles (usec): 00:54:19.692 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:54:19.692 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5866], 00:54:19.692 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6521], 00:54:19.692 | 99.00th=[ 7046], 99.50th=[ 7767], 99.90th=[ 8586], 99.95th=[ 8717], 00:54:19.692 | 99.99th=[ 9634] 00:54:19.692 bw ( KiB/s): 
min=47976, max=49448, per=99.95%, avg=48764.00, stdev=671.67, samples=4 00:54:19.692 iops : min=11994, max=12362, avg=12191.00, stdev=167.92, samples=4 00:54:19.692 write: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(95.2MiB/2005msec); 0 zone resets 00:54:19.692 slat (nsec): min=1563, max=224069, avg=1824.27, stdev=1640.71 00:54:19.692 clat (usec): min=2450, max=8826, avg=4676.96, stdev=398.33 00:54:19.692 lat (usec): min=2465, max=8828, avg=4678.79, stdev=398.41 00:54:19.692 clat percentiles (usec): 00:54:19.692 | 1.00th=[ 3818], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:54:19.692 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:54:19.692 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:54:19.692 | 99.00th=[ 5669], 99.50th=[ 6521], 99.90th=[ 7504], 99.95th=[ 7832], 00:54:19.692 | 99.99th=[ 8848] 00:54:19.692 bw ( KiB/s): min=48320, max=48960, per=100.00%, avg=48624.00, stdev=264.85, samples=4 00:54:19.692 iops : min=12080, max=12240, avg=12156.00, stdev=66.21, samples=4 00:54:19.692 lat (msec) : 4=1.58%, 10=98.42% 00:54:19.692 cpu : usr=71.11%, sys=25.70%, ctx=113, majf=0, minf=28 00:54:19.692 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:54:19.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:19.692 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:54:19.692 issued rwts: total=24455,24366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:19.692 latency : target=0, window=0, percentile=100.00%, depth=128 00:54:19.692 00:54:19.692 Run status group 0 (all jobs): 00:54:19.692 READ: bw=47.6MiB/s (50.0MB/s), 47.6MiB/s-47.6MiB/s (50.0MB/s-50.0MB/s), io=95.5MiB (100MB), run=2005-2005msec 00:54:19.692 WRITE: bw=47.5MiB/s (49.8MB/s), 47.5MiB/s-47.5MiB/s (49.8MB/s-49.8MB/s), io=95.2MiB (99.8MB), run=2005-2005msec 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:54:19.692 03:51:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:54:19.692 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:54:19.692 fio-3.35 00:54:19.692 Starting 1 thread 00:54:19.692 EAL: No free 2048 kB hugepages reported on node 1 00:54:22.223 00:54:22.223 test: (groupid=0, jobs=1): err= 0: pid=2338462: Tue Jun 11 03:51:03 2024 00:54:22.223 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(339MiB/2006msec) 00:54:22.223 slat (nsec): min=2533, max=82299, avg=2875.33, stdev=1186.19 00:54:22.223 clat (usec): min=1419, max=13192, avg=7008.71, stdev=1730.67 00:54:22.223 lat (usec): min=1421, max=13195, avg=7011.58, stdev=1730.74 00:54:22.223 clat percentiles (usec): 00:54:22.223 | 1.00th=[ 3589], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5473], 00:54:22.223 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6980], 60.00th=[ 7504], 00:54:22.223 | 70.00th=[ 7898], 80.00th=[ 8291], 90.00th=[ 9110], 95.00th=[10028], 00:54:22.223 | 99.00th=[11994], 99.50th=[12387], 99.90th=[12911], 99.95th=[13042], 00:54:22.223 | 99.99th=[13173] 00:54:22.223 bw ( KiB/s): min=77152, max=92288, per=49.87%, avg=86368.00, stdev=6815.47, samples=4 00:54:22.223 iops : min= 4822, max= 5768, avg=5398.00, stdev=425.97, samples=4 00:54:22.223 write: IOPS=6394, BW=99.9MiB/s (105MB/s)(176MiB/1763msec); 0 zone resets 00:54:22.223 slat (usec): min=29, max=251, avg=32.28, stdev= 5.14 00:54:22.223 clat (usec): min=3439, max=14023, avg=8385.07, stdev=1489.55 00:54:22.223 lat (usec): min=3468, max=14055, avg=8417.34, stdev=1490.03 00:54:22.223 clat percentiles (usec): 00:54:22.223 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6718], 20.00th=[ 7177], 00:54:22.223 | 30.00th=[ 7504], 40.00th=[ 7832], 50.00th=[ 8160], 60.00th=[ 8586], 00:54:22.223 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[11207], 00:54:22.223 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13698], 99.95th=[13698], 00:54:22.223 | 99.99th=[13960] 00:54:22.223 bw ( KiB/s): min=80608, max=95488, per=87.95%, avg=89984.00, stdev=6715.88, samples=4 00:54:22.223 iops : min= 5038, max= 5968, avg=5624.00, stdev=419.74, samples=4 00:54:22.223 lat (msec) : 2=0.04%, 4=1.49%, 10=90.09%, 20=8.38% 00:54:22.223 cpu : usr=85.39%, sys=13.42%, ctx=33, majf=0, 
minf=54 00:54:22.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:54:22.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:22.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:54:22.223 issued rwts: total=21712,11273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:22.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:54:22.223 00:54:22.223 Run status group 0 (all jobs): 00:54:22.223 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=339MiB (356MB), run=2006-2006msec 00:54:22.223 WRITE: bw=99.9MiB/s (105MB/s), 99.9MiB/s-99.9MiB/s (105MB/s-105MB/s), io=176MiB (185MB), run=1763-1763msec 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=() 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # local bdfs 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5f:00.0 00:54:22.223 03:51:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5f:00.0 -i 10.0.0.2 00:54:25.507 Nvme0n1 00:54:25.507 03:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:54:30.775 03:51:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=79efc2e4-c38a-461e-a9e4-9fd7cb74577c 00:54:30.775 03:51:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 79efc2e4-c38a-461e-a9e4-9fd7cb74577c 00:54:30.775 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=79efc2e4-c38a-461e-a9e4-9fd7cb74577c 00:54:30.775 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:54:30.775 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:54:30.776 { 00:54:30.776 "uuid": "79efc2e4-c38a-461e-a9e4-9fd7cb74577c", 00:54:30.776 "name": "lvs_0", 00:54:30.776 "base_bdev": "Nvme0n1", 00:54:30.776 "total_data_clusters": 1489, 00:54:30.776 "free_clusters": 1489, 00:54:30.776 
"block_size": 512, 00:54:30.776 "cluster_size": 1073741824 00:54:30.776 } 00:54:30.776 ]' 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="79efc2e4-c38a-461e-a9e4-9fd7cb74577c") .free_clusters' 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=1489 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="79efc2e4-c38a-461e-a9e4-9fd7cb74577c") .cluster_size' 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=1073741824 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1524736 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1524736 00:54:30.776 1524736 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1524736 00:54:30.776 62eef544-8d8e-42d9-b110-c33cb5a3a8f1 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:54:30.776 03:51:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:54:30.776 03:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:54:31.034 03:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:31.292 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:54:31.292 fio-3.35 00:54:31.292 Starting 1 thread 00:54:31.292 EAL: No free 2048 kB hugepages reported on node 1 00:54:33.823 [2024-06-11 03:51:15.019474] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x84e3e0 is same with the state(5) to be set 00:54:33.823 00:54:33.823 test: (groupid=0, jobs=1): err= 0: pid=2340897: Tue Jun 11 03:51:15 2024 00:54:33.823 read: IOPS=7762, BW=30.3MiB/s (31.8MB/s)(60.8MiB/2006msec) 00:54:33.823 slat (nsec): min=1543, max=96995, avg=1680.47, stdev=1129.80 00:54:33.823 clat (usec): min=372, max=269685, avg=8912.91, stdev=16014.04 00:54:33.823 lat (usec): min=374, max=269688, avg=8914.59, stdev=16014.10 00:54:33.823 clat percentiles (msec): 00:54:33.823 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:54:33.823 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9], 00:54:33.823 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:54:33.823 | 99.00th=[ 10], 99.50th=[ 11], 99.90th=[ 271], 99.95th=[ 271], 00:54:33.823 | 99.99th=[ 271] 00:54:33.823 bw ( KiB/s): min=16152, max=36200, per=99.88%, avg=31014.00, stdev=9913.43, samples=4 00:54:33.823 iops : min= 4038, max= 9050, avg=7753.50, stdev=2478.36, samples=4 00:54:33.823 write: IOPS=7749, BW=30.3MiB/s (31.7MB/s)(60.7MiB/2006msec); 0 zone resets 00:54:33.823 slat (nsec): min=1593, max=100300, avg=1776.04, stdev=942.69 00:54:33.823 clat (usec): min=291, max=268301, avg=7496.15, stdev=17103.93 00:54:33.823 lat (usec): min=293, max=268307, avg=7497.93, stdev=17104.08 00:54:33.823 clat percentiles (msec): 00:54:33.823 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:54:33.823 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:54:33.823 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:54:33.823 | 99.00th=[ 8], 99.50th=[ 12], 99.90th=[ 268], 99.95th=[ 268], 00:54:33.823 | 99.99th=[ 268] 00:54:33.823 bw ( KiB/s): min=17232, max=35784, per=99.90%, avg=30966.00, stdev=9157.69, samples=4 00:54:33.823 iops : min= 4308, max= 8946, avg=7741.50, stdev=2289.42, samples=4 00:54:33.823 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.01% 00:54:33.823 lat (msec) : 2=0.09%, 4=0.23%, 10=99.04%, 20=0.17%, 500=0.41% 00:54:33.823 cpu : usr=70.52%, sys=27.43%, ctx=43, majf=0, minf=33 00:54:33.823 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:54:33.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:33.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:54:33.823 issued rwts: total=15572,15545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:33.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:54:33.823 00:54:33.823 Run status group 0 (all jobs): 00:54:33.823 READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=60.8MiB (63.8MB), run=2006-2006msec 00:54:33.823 WRITE: bw=30.3MiB/s (31.7MB/s), 30.3MiB/s-30.3MiB/s (31.7MB/s-31.7MB/s), io=60.7MiB (63.7MB), run=2006-2006msec 00:54:33.823 03:51:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:54:34.082 03:51:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:54:35.017 03:51:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=21299c58-e3c3-41df-ab47-51443515be79 00:54:35.017 03:51:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 21299c58-e3c3-41df-ab47-51443515be79 00:54:35.017 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_uuid=21299c58-e3c3-41df-ab47-51443515be79 00:54:35.017 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_info 00:54:35.017 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local fc 00:54:35.018 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local cs 00:54:35.018 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:54:35.018 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # lvs_info='[ 00:54:35.018 { 00:54:35.018 "uuid": "79efc2e4-c38a-461e-a9e4-9fd7cb74577c", 00:54:35.018 "name": "lvs_0", 00:54:35.018 "base_bdev": "Nvme0n1", 00:54:35.018 "total_data_clusters": 1489, 00:54:35.018 "free_clusters": 0, 00:54:35.018 "block_size": 512, 00:54:35.018 "cluster_size": 1073741824 00:54:35.018 }, 00:54:35.018 { 00:54:35.018 "uuid": "21299c58-e3c3-41df-ab47-51443515be79", 00:54:35.018 "name": "lvs_n_0", 00:54:35.018 "base_bdev": "62eef544-8d8e-42d9-b110-c33cb5a3a8f1", 00:54:35.018 "total_data_clusters": 380811, 00:54:35.018 "free_clusters": 380811, 00:54:35.018 "block_size": 512, 00:54:35.018 "cluster_size": 4194304 00:54:35.018 } 00:54:35.018 ]' 00:54:35.018 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="21299c58-e3c3-41df-ab47-51443515be79") .free_clusters' 00:54:35.018 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # fc=380811 00:54:35.018 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="21299c58-e3c3-41df-ab47-51443515be79") .cluster_size' 00:54:35.276 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # cs=4194304 00:54:35.276 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1372 -- # free_mb=1523244 00:54:35.276 03:51:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # echo 1523244 00:54:35.276 1523244 00:54:35.276 03:51:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l 
lvs_n_0 lbd_nest_0 1523244 00:54:35.843 9d9abbf1-f984-411f-9a23-ed60cf9e279c 00:54:35.843 03:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:54:36.101 03:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:54:36.360 03:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1359 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local sanitizers 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # shift 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local asan_lib= 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libasan 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # awk '{print $3}' 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # asan_lib= 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:54:36.648 03:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1351 -- # 
/usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:54:36.908 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:54:36.908 fio-3.35 00:54:36.908 Starting 1 thread 00:54:36.908 EAL: No free 2048 kB hugepages reported on node 1 00:54:39.426 00:54:39.426 test: (groupid=0, jobs=1): err= 0: pid=2341918: Tue Jun 11 03:51:20 2024 00:54:39.426 read: IOPS=7862, BW=30.7MiB/s (32.2MB/s)(61.6MiB/2007msec) 00:54:39.426 slat (nsec): min=1555, max=107760, avg=1704.94, stdev=1121.14 00:54:39.426 clat (usec): min=3005, max=13629, avg=8950.54, stdev=791.55 00:54:39.426 lat (usec): min=3023, max=13630, avg=8952.25, stdev=791.49 00:54:39.426 clat percentiles (usec): 00:54:39.426 | 1.00th=[ 7177], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8356], 00:54:39.426 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:54:39.426 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:54:39.426 | 99.00th=[10814], 99.50th=[11731], 99.90th=[12518], 99.95th=[13042], 00:54:39.426 | 99.99th=[13566] 00:54:39.426 bw ( KiB/s): min=30056, max=32152, per=99.93%, avg=31430.00, stdev=947.35, samples=4 00:54:39.426 iops : min= 7514, max= 8038, avg=7857.50, stdev=236.84, samples=4 00:54:39.426 write: IOPS=7836, BW=30.6MiB/s (32.1MB/s)(61.4MiB/2007msec); 0 zone resets 00:54:39.426 slat (nsec): min=1598, max=79570, avg=1795.01, stdev=779.36 00:54:39.426 clat (usec): min=1446, max=12567, avg=7208.06, stdev=688.15 00:54:39.426 lat (usec): min=1454, max=12569, avg=7209.86, stdev=688.14 00:54:39.426 clat percentiles (usec): 00:54:39.426 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 6718], 00:54:39.426 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7373], 00:54:39.426 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8225], 00:54:39.426 | 99.00th=[ 8979], 99.50th=[ 9896], 99.90th=[11338], 99.95th=[12518], 00:54:39.426 | 99.99th=[12518] 00:54:39.426 bw ( KiB/s): min=31112, max=31496, per=99.96%, avg=31332.00, stdev=191.16, samples=4 00:54:39.427 iops : min= 7778, max= 7874, avg=7833.00, stdev=47.79, samples=4 00:54:39.427 lat (msec) : 2=0.01%, 4=0.11%, 10=95.84%, 20=4.04% 00:54:39.427 cpu : usr=70.39%, sys=27.27%, ctx=121, majf=0, minf=33 00:54:39.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:54:39.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:39.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:54:39.427 issued rwts: total=15781,15727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:39.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:54:39.427 00:54:39.427 Run status group 0 (all jobs): 00:54:39.427 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.6MiB (64.6MB), run=2007-2007msec 00:54:39.427 WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.4MiB (64.4MB), run=2007-2007msec 00:54:39.427 03:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:54:39.427 03:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:54:39.427 03:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:54:45.965 03:51:26 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:54:45.965 03:51:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:54:50.171 03:51:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:54:50.171 03:51:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:52.692 rmmod nvme_tcp 00:54:52.692 rmmod nvme_fabrics 00:54:52.692 rmmod nvme_keyring 00:54:52.692 03:51:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2337441 ']' 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2337441 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@949 -- # '[' -z 2337441 ']' 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # kill -0 2337441 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # uname 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2337441 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2337441' 00:54:52.692 killing process with pid 2337441 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@968 -- # kill 2337441 00:54:52.692 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@973 -- # wait 2337441 00:54:52.950 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:52.950 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:52.950 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:52.950 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:52.950 03:51:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:52.950 03:51:34 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:52.950 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:52.950 03:51:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:55.481 03:51:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:54:55.481 00:54:55.481 real 0m45.942s 00:54:55.481 user 3m4.156s 00:54:55.481 sys 0m8.603s 00:54:55.481 03:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:54:55.481 03:51:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:54:55.481 ************************************ 00:54:55.481 END TEST nvmf_fio_host 00:54:55.481 ************************************ 00:54:55.481 03:51:36 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:54:55.481 03:51:36 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:54:55.481 03:51:36 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:54:55.481 03:51:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:55.481 ************************************ 00:54:55.481 START TEST nvmf_failover 00:54:55.481 ************************************ 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:54:55.481 * Looking for test storage... 00:54:55.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:54:55.481 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:54:55.482 03:51:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:54:55.482 03:51:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:02.037 
03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:02.037 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:55:02.038 Found 0000:86:00.0 (0x8086 - 0x159b) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:55:02.038 Found 0000:86:00.1 (0x8086 - 0x159b) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
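The loop being traced here resolves each supported PCI function to the kernel net device that backs it, purely through sysfs. A minimal standalone sketch of that lookup, assuming the PCI address reported in the "Found 0000:86:00.0 (0x8086 - 0x159b)" line below (the glob and the ##*/ strip are the ones nvmf/common.sh itself uses):

#!/usr/bin/env bash
# Resolve a PCI function to its net device name, as gather_supported_nvmf_pci_devs does.
pci=0000:86:00.0                                  # from the "Found ..." line; adjust per system
for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
  [ -e "$netdir" ] || continue                    # unmatched glob stays literal; skip if no netdev is bound
  echo "PCI $pci -> net device ${netdir##*/}"     # ##*/ strips the path, leaving e.g. cvl_0_0
done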
00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:55:02.038 Found net devices under 0000:86:00.0: cvl_0_0 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:55:02.038 Found net devices under 0000:86:00.1: cvl_0_1 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:02.038 
03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:55:02.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:02.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:55:02.038 00:55:02.038 --- 10.0.0.2 ping statistics --- 00:55:02.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:02.038 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:02.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:02.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:55:02.038 00:55:02.038 --- 10.0.0.1 ping statistics --- 00:55:02.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:02.038 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@723 -- # xtrace_disable 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2348255 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2348255 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2348255 ']' 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:55:02.038 03:51:42 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:02.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:55:02.038 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:55:02.038 [2024-06-11 03:51:42.641666] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:55:02.038 [2024-06-11 03:51:42.641706] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:02.038 EAL: No free 2048 kB hugepages reported on node 1 00:55:02.038 [2024-06-11 03:51:42.705652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:55:02.038 [2024-06-11 03:51:42.746497] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:02.038 [2024-06-11 03:51:42.746533] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:02.039 [2024-06-11 03:51:42.746540] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:02.039 [2024-06-11 03:51:42.746546] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:02.039 [2024-06-11 03:51:42.746551] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:02.039 [2024-06-11 03:51:42.746663] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:55:02.039 [2024-06-11 03:51:42.746749] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:55:02.039 [2024-06-11 03:51:42.746750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:55:02.039 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:55:02.039 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:55:02.039 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:02.039 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@729 -- # xtrace_disable 00:55:02.039 03:51:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:55:02.039 03:51:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:02.039 03:51:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:55:02.039 [2024-06-11 03:51:43.020134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:02.039 03:51:43 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:55:02.039 Malloc0 00:55:02.039 03:51:43 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:55:02.039 03:51:43 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:02.295 03:51:43 nvmf_tcp.nvmf_failover -- host/failover.sh@26 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:55:02.552 [2024-06-11 03:51:43.778083] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:02.552 03:51:43 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:55:02.552 [2024-06-11 03:51:43.950529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:55:02.810 03:51:43 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:55:02.810 [2024-06-11 03:51:44.123080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2348511 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2348511 /var/tmp/bdevperf.sock 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2348511 ']' 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:02.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
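At this point the target side is fully assembled (TCP transport, the Malloc0 namespace under nqn.2016-06.io.spdk:cnode1, listeners on ports 4420/4421/4422) and bdevperf is up with its own RPC socket. The next traced steps attach the same controller name over two of those portals through that socket, which is what gives bdev_nvme an alternate path to fail over to. A condensed sketch of that host-side sequence, using the paths and addresses of this job:

# Attach one controller over two portals: the first call creates bdev NVMe0n1 via
# port 4420, the second registers port 4421 as an additional path for failover.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$nqn"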
00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:55:02.810 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:55:03.067 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:55:03.067 03:51:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0
00:55:03.067 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:03.325 NVMe0n1
00:55:03.325 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:03.582 00
00:55:03.839 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2348737
00:55:03.839 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:03.839 03:51:44 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:55:04.774 03:51:46 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:55:04.774 [2024-06-11 03:51:46.175963] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af39b0 is same with the state(5) to be set
00:55:04.774 (previous tcp.c:1602 message repeated verbatim for tqpair=0x1af39b0, timestamps 03:51:46.176022 through 03:51:46.176368; identical entries elided)
00:55:05.032 03:51:46 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:55:08.312 03:51:49 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:08.312 00
00:55:08.569 03:51:49 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:55:08.569 03:51:49 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:55:11.847 03:51:52 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:55:11.847 [2024-06-11 03:51:52.952377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:55:11.847 03:51:52 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:55:12.780 03:51:53 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:55:12.780 [2024-06-11 03:51:54.145250] tcp.c:1602:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af5ed0 is same with the state(5) to be set
00:55:12.780 (previous tcp.c:1602 message repeated verbatim for tqpair=0x1af5ed0, timestamps 03:51:54.145296 through 03:51:54.145337; identical entries elided)
00:55:12.780 03:51:54 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2348737 00:55:19.341 0
00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2348511 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover --
common/autotest_common.sh@953 -- # kill -0 2348511 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2348511 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2348511' 00:55:19.341 killing process with pid 2348511 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2348511 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2348511 00:55:19.341 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:55:19.341 [2024-06-11 03:51:44.190374] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:55:19.341 [2024-06-11 03:51:44.190427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2348511 ] 00:55:19.341 EAL: No free 2048 kB hugepages reported on node 1 00:55:19.341 [2024-06-11 03:51:44.251363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:19.341 [2024-06-11 03:51:44.291800] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:55:19.341 Running I/O for 15 seconds... 
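The try.txt dump that follows is the host-side record of the failover exercise: each time a listener is pulled out from under the active connection, every command still in flight on that queue completes with ABORTED - SQ DELETION (00/08) and bdev_nvme resumes I/O on a surviving path. Condensed from the trace above (failover.sh@43 through @57, timestamps omitted), the listener choreography that produced it:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # drop the active path -> aborts below
sleep 3
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # drop the fallback path
sleep 3
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # restore the original port
sleep 1
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422   # force failover back to 4420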
00:55:19.341 [2024-06-11 03:51:46.177270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:19.341 [2024-06-11 03:51:46.177312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:55:19.341 (the READ print_command / ABORTED - SQ DELETION (00/08) print_completion pair above repeats once per in-flight command, lba 98200 through 98584 in steps of 8 with cid varying per command; those identical entries are elided here)
[2024-06-11 03:51:46.178437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:55:19.343 [2024-06-11 03:51:46.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.343 [2024-06-11 03:51:46.178640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178662] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178879] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.343 [2024-06-11 03:51:46.178984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.343 [2024-06-11 03:51:46.178994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 
[2024-06-11 03:51:46.179339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.344 [2024-06-11 03:51:46.179697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.344 [2024-06-11 03:51:46.179750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99064 len:8 PRP1 0x0 PRP2 0x0 00:55:19.344 [2024-06-11 03:51:46.179760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.344 [2024-06-11 03:51:46.179781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.344 [2024-06-11 03:51:46.179789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99072 len:8 PRP1 0x0 PRP2 0x0 00:55:19.344 [2024-06-11 03:51:46.179799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:55:19.344 [2024-06-11 03:51:46.179816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.344 [2024-06-11 03:51:46.179828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99080 len:8 PRP1 0x0 PRP2 0x0 00:55:19.344 [2024-06-11 03:51:46.179839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.344 [2024-06-11 03:51:46.179849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.344 [2024-06-11 03:51:46.179858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.344 [2024-06-11 03:51:46.179868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99088 len:8 PRP1 0x0 PRP2 0x0 00:55:19.344 [2024-06-11 03:51:46.179878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.179888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.179896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.179905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99096 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.179915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.179926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.179934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.179942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99104 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.179952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.179962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.179970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.179978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99112 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.179988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.179997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99120 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180047] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99128 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99136 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99144 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99152 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99160 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99168 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99176 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99184 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99192 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99208 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.345 [2024-06-11 03:51:46.180453] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.345 [2024-06-11 03:51:46.180461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98672 len:8 PRP1 0x0 PRP2 0x0 00:55:19.345 [2024-06-11 03:51:46.180471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180518] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x209ae90 was disconnected and freed. reset controller. 
00:55:19.345 [2024-06-11 03:51:46.180532] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:55:19.345 [2024-06-11 03:51:46.180559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.345 [2024-06-11 03:51:46.180570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.345 [2024-06-11 03:51:46.180592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.345 [2024-06-11 03:51:46.180612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.345 [2024-06-11 03:51:46.180632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.345 [2024-06-11 03:51:46.180642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:55:19.345 [2024-06-11 03:51:46.180685] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207d040 (9): Bad file descriptor 00:55:19.345 [2024-06-11 03:51:46.183755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:55:19.345 [2024-06-11 03:51:46.212957] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:55:19.345-348 [2024-06-11 03:51:49.764213-765776] nvme_qpair.c: the same 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs repeat as the next qpair is drained after failover: WRITE sqid:1 nsid:1 lba:45296-45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:44472-44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, one entry per queued command (cids vary), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. [2024-06-11 03:51:49.765787] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.765986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.765998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:55:19.348 [2024-06-11 03:51:49.766251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.348 [2024-06-11 03:51:49.766261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.348 [2024-06-11 03:51:49.766283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.348 [2024-06-11 03:51:49.766305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.348 [2024-06-11 03:51:49.766327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.348 [2024-06-11 03:51:49.766349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766470] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.348 [2024-06-11 03:51:49.766604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.348 [2024-06-11 03:51:49.766615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766915] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.766980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.766990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.767018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:49.767041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2247d00 is same with the state(5) to be set 00:55:19.349 [2024-06-11 03:51:49.767066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:55:19.349 [2024-06-11 03:51:49.767075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.349 [2024-06-11 03:51:49.767084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45288 len:8 PRP1 0x0 PRP2 0x0 00:55:19.349 [2024-06-11 03:51:49.767094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767146] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2247d00 was disconnected and freed. reset controller. 
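[For reference: the "(00/08)" pair printed with every aborted command above is SCT/SC, i.e. Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), and the trailing p/m/dnr flags come from the same NVMe completion status field. The small C program below is an illustrative decoder of that field layout only, assuming the NVMe-spec bit positions for CQE Dword 3 bits 31:16 plus the phase tag; the struct and function names are local stand-ins, not SPDK source.]

/*
 * Illustrative decoder for the status printed above as
 * "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0".
 * Bit layout per the NVMe completion queue entry: P in bit 0,
 * SC in bits 8:1, SCT in 11:9, CRD in 13:12, M in 14, DNR in 15.
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_fields {
    unsigned p   : 1; /* phase tag                        */
    unsigned sc  : 8; /* status code                      */
    unsigned sct : 3; /* status code type                 */
    unsigned crd : 2; /* command retry delay (NVMe 1.4+)  */
    unsigned m   : 1; /* more: extra info in the log page */
    unsigned dnr : 1; /* do not retry                     */
};

static struct nvme_status_fields decode_status(uint16_t raw)
{
    struct nvme_status_fields s = {
        .p   = (raw >> 0)  & 0x1,
        .sc  = (raw >> 1)  & 0xff,
        .sct = (raw >> 9)  & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT=0x0, SC=0x08: the "ABORTED - SQ DELETION (00/08)" in this log. */
    uint16_t raw = (uint16_t)(0x08 << 1);
    struct nvme_status_fields s = decode_status(raw);
    printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
           (unsigned)s.sct, (unsigned)s.sc,
           (unsigned)s.p, (unsigned)s.m, (unsigned)s.dnr);
    return 0;
}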
00:55:19.349 [2024-06-11 03:51:49.767161] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:55:19.349 [2024-06-11 03:51:49.767189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.349 [2024-06-11 03:51:49.767200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.349 [2024-06-11 03:51:49.767221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.349 [2024-06-11 03:51:49.767242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.349 [2024-06-11 03:51:49.767263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:49.767273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:55:19.349 [2024-06-11 03:51:49.770379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:55:19.349 [2024-06-11 03:51:49.770417] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207d040 (9): Bad file descriptor 00:55:19.349 [2024-06-11 03:51:49.845437] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
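[For reference: the block above records the failover path, in order: the TCP qpair to 10.0.0.2:4421 hits a bad file descriptor, queued I/O is aborted with SQ DELETION status, the controller is marked failed, and bdev_nvme reconnects against the next listed address, 10.0.0.2:4422, finishing with "Resetting controller successful." The sketch below is a hypothetical stand-alone model of that sequence only; none of its types or helpers are SPDK APIs.]

/*
 * Hypothetical sketch (not SPDK code) of the failover sequence logged
 * above.  All types and helpers are illustrative stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

struct trid { const char *addr; const char *svcid; };

static void abort_queued_io(void)
{
    /* Each queued command completes with ABORTED - SQ DELETION (00/08). */
    printf("aborting queued i/o\n");
}

/* Stand-in for reconnecting the controller; always "succeeds" here. */
static bool connect_ctrlr(const struct trid *t)
{
    printf("resetting controller against %s:%s\n", t->addr, t->svcid);
    return true; /* a real path would retry or fail over again on error */
}

int main(void)
{
    /* Addresses taken from the log's failover notice. */
    struct trid paths[] = { { "10.0.0.2", "4421" }, { "10.0.0.2", "4422" } };
    size_t active = 0;

    /* Qpair to paths[0] disconnected: drain it, then move to the next trid. */
    abort_queued_io();
    active = (active + 1) % 2; /* "Start failover from ...4421 to ...4422" */

    if (connect_ctrlr(&paths[active]))
        printf("Resetting controller successful.\n");
    return 0;
}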
00:55:19.349 [2024-06-11 03:51:54.146368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:54.146431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:54.146453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:54.146473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:54.146494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:54.146516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:54.146542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.349 [2024-06-11 03:51:54.146564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.349 [2024-06-11 03:51:54.146573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.350 [2024-06-11 03:51:54.146592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:55:19.350 [2024-06-11 03:51:54.146612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.146984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.146996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70296 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:55:19.350 [2024-06-11 03:51:54.147318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.350 [2024-06-11 03:51:54.147431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.350 [2024-06-11 03:51:54.147442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:70432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:70448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147761] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.351 [2024-06-11 03:51:54.147772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:55:19.351 [2024-06-11 03:51:54.147782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION (00/08) notice pair repeats for every in-flight command on qid:1, lba 70552 through lba 71024 (len:8 each), timestamps 03:51:54.147794 through 03:51:54.149118; elided here ...]
00:55:19.353 [2024-06-11 03:51:54.149147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:55:19.353 [2024-06-11 03:51:54.149158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71032 len:8 PRP1 0x0 PRP2 0x0 00:55:19.353 [2024-06-11 03:51:54.149168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the queued, not-yet-submitted WRITEs lba 71040 through 71072 (PRP1 0x0 PRP2 0x0) are drained the same way, each preceded by nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; elided here ...]
00:55:19.353 [2024-06-11
03:51:54.149404] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2247af0 was disconnected and freed. reset controller. 00:55:19.353 [2024-06-11 03:51:54.149418] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:55:19.353 [2024-06-11 03:51:54.149447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.353 [2024-06-11 03:51:54.149459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.353 [2024-06-11 03:51:54.149470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.353 [2024-06-11 03:51:54.149480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.353 [2024-06-11 03:51:54.149491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.353 [2024-06-11 03:51:54.149501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.353 [2024-06-11 03:51:54.149512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:19.353 [2024-06-11 03:51:54.149523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:19.353 [2024-06-11 03:51:54.149533] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:55:19.353 [2024-06-11 03:51:54.149572] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207d040 (9): Bad file descriptor 00:55:19.353 [2024-06-11 03:51:54.152618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:55:19.353 [2024-06-11 03:51:54.223306] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
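The notices above are the failover itself: the disconnected qpair 0x2247af0 is freed, bdev_nvme moves the trid from 10.0.0.2:4422 back to 10.0.0.2:4420, the pending admin ASYNC EVENT REQUESTs are aborted, and the controller is reset against the surviving path. A minimal sketch of the setup that produces this behavior, using only RPCs that appear verbatim elsewhere in this log (the subsystem NQN, bdev name, and ports are this run's; error handling omitted):

    # advertise the same subsystem on extra ports, attach one named controller
    # through each path, then rip out the active path; bdev_nvme fails over
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # registers the alternate trid
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1      # forces the failover traced above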
00:55:19.353 00:55:19.353 Latency(us) 00:55:19.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:19.353 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:55:19.353 Verification LBA range: start 0x0 length 0x4000 00:55:19.353 NVMe0n1 : 15.01 11303.66 44.15 540.90 0.00 10785.18 608.55 16602.45 00:55:19.353 =================================================================================================================== 00:55:19.353 Total : 11303.66 44.15 540.90 0.00 10785.18 608.55 16602.45 00:55:19.353 Received shutdown signal, test time was about 15.000000 seconds 00:55:19.353 00:55:19.353 Latency(us) 00:55:19.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:19.353 =================================================================================================================== 00:55:19.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2351047 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2351047 /var/tmp/bdevperf.sock 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # '[' -z 2351047 ']' 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # local max_retries=100 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:19.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
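The grep -c at host/failover.sh@65 above is the pass criterion for the first phase: the three forced path removals must have left exactly three 'Resetting controller successful' notices in the captured trace. A sketch of the idiom, assuming the run's output was captured to try.txt as the later 'cat .../try.txt' step suggests:

    count=$(grep -c 'Resetting controller successful' test/nvmf/host/try.txt)
    (( count == 3 )) || exit 1   # one successful reset expected per forced failover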
00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@839 -- # xtrace_disable 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@863 -- # return 0 00:55:19.353 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:55:19.611 [2024-06-11 03:52:00.762310] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:55:19.611 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:55:19.611 [2024-06-11 03:52:00.942823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:55:19.611 03:52:00 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:19.868 NVMe0n1 00:55:19.868 03:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:20.481 00:55:20.481 03:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:20.481 00:55:20.481 03:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:55:20.481 03:52:01 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:55:20.749 03:52:02 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:21.009 03:52:02 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:55:24.287 03:52:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:55:24.287 03:52:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:55:24.287 03:52:05 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2351960 00:55:24.287 03:52:05 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:55:24.287 03:52:05 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2351960 00:55:25.220 0 00:55:25.220 03:52:06 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:55:25.220 [2024-06-11 03:52:00.414627] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
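Worth noting in the trace above: bdevperf was launched with -z at host/failover.sh@72, so it comes up idle, and host/failover.sh@89 then drives the one-second verify pass over its RPC socket. The pattern, condensed from this run's own commands:

    # -z = wait for RPC before running; -r names the socket both sides use
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    # ...attach NVMe0 through rpc.py -s /var/tmp/bdevperf.sock, then trigger the run:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests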
00:55:25.220 [2024-06-11 03:52:00.414680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2351047 ] 00:55:25.220 EAL: No free 2048 kB hugepages reported on node 1 00:55:25.220 [2024-06-11 03:52:00.476040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:25.220 [2024-06-11 03:52:00.513050] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:55:25.220 [2024-06-11 03:52:02.168185] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:55:25.220 [2024-06-11 03:52:02.168235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:25.220 [2024-06-11 03:52:02.168251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:25.220 [2024-06-11 03:52:02.168262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:25.220 [2024-06-11 03:52:02.168271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:25.220 [2024-06-11 03:52:02.168282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:25.220 [2024-06-11 03:52:02.168291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:25.220 [2024-06-11 03:52:02.168301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:25.220 [2024-06-11 03:52:02.168311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:25.220 [2024-06-11 03:52:02.168320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:55:25.220 [2024-06-11 03:52:02.168364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:55:25.220 [2024-06-11 03:52:02.168383] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa22040 (9): Bad file descriptor 00:55:25.220 [2024-06-11 03:52:02.180795] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:55:25.220 Running I/O for 1 seconds... 
00:55:25.220 00:55:25.220 Latency(us) 00:55:25.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:25.220 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:55:25.220 Verification LBA range: start 0x0 length 0x4000 00:55:25.220 NVMe0n1 : 1.00 11390.63 44.49 0.00 0.00 11197.27 1989.49 9674.36 00:55:25.220 =================================================================================================================== 00:55:25.220 Total : 11390.63 44.49 0.00 0.00 11197.27 1989.49 9674.36 00:55:25.220 03:52:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:55:25.220 03:52:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:55:25.477 03:52:06 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:25.477 03:52:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:55:25.477 03:52:06 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:55:25.734 03:52:07 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:55:25.991 03:52:07 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2351047 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2351047 ']' 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2351047 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2351047 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2351047' 00:55:29.269 killing process with pid 2351047 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2351047 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2351047 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:55:29.269 03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:55:29.526 
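The killprocess call traced at common/autotest_common.sh@949-973 above is the harness's guarded kill: it checks that the pid still names the process it expects before signalling it, then waits so the exit is reaped inside the traced shell. A condensed sketch of what the trace shows (the real helper also refuses to kill processes running as sudo):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        # confirm the pid is still the process we started (e.g. reactor_0)
        ps --no-headers -o comm= "$pid" >/dev/null && kill "$pid" && wait "$pid"
    }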
03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:55:29.526 rmmod nvme_tcp 00:55:29.526 rmmod nvme_fabrics 00:55:29.526 rmmod nvme_keyring 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2348255 ']' 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2348255 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@949 -- # '[' -z 2348255 ']' 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # kill -0 2348255 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # uname 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2348255 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2348255' 00:55:29.526 killing process with pid 2348255 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@968 -- # kill 2348255 00:55:29.526 03:52:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@973 -- # wait 2348255 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:29.784 03:52:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:32.316 03:52:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:55:32.316 00:55:32.316 real 0m36.768s 00:55:32.316 user 1m55.250s 00:55:32.316 sys 0m7.885s 00:55:32.316 03:52:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1125 -- # xtrace_disable 00:55:32.316 03:52:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:55:32.316 ************************************ 00:55:32.316 END TEST nvmf_failover 00:55:32.316 ************************************ 00:55:32.316 03:52:13 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:55:32.316 03:52:13 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:55:32.316 03:52:13 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:55:32.316 03:52:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:55:32.316 ************************************ 00:55:32.316 START TEST nvmf_host_discovery 00:55:32.316 ************************************ 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:55:32.316 * Looking for test storage... 00:55:32.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:32.316 03:52:13 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:55:32.316 03:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:37.591 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:55:37.592 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:55:37.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:55:37.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:55:37.851 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:55:37.852 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:55:37.852 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:55:37.852 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:55:37.852 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:37.852 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:55:37.852 03:52:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:55:37.852 Found net devices under 0000:86:00.0: cvl_0_0 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:55:37.852 Found net devices under 0000:86:00.1: cvl_0_1 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:55:37.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:37.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:55:37.852 00:55:37.852 --- 10.0.0.2 ping statistics --- 00:55:37.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:37.852 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:55:37.852 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:38.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:55:38.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:55:38.116 00:55:38.116 --- 10.0.0.1 ping statistics --- 00:55:38.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:38.116 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2356533 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2356533 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 2356533 ']' 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:55:38.116 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.116 [2024-06-11 03:52:19.346230] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:55:38.116 [2024-06-11 03:52:19.346272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:38.116 EAL: No free 2048 kB hugepages reported on node 1 00:55:38.116 [2024-06-11 03:52:19.413916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:38.116 [2024-06-11 03:52:19.452742] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:38.116 [2024-06-11 03:52:19.452781] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:38.116 [2024-06-11 03:52:19.452788] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:38.116 [2024-06-11 03:52:19.452793] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:38.116 [2024-06-11 03:52:19.452799] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
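The target that just reported 'Reactor started on core 1' runs under ip netns exec cvl_0_0_ns_spdk, matching the topology nvmf_tcp_init built a few lines earlier: one port of the test NIC is moved into a private namespace as the target side (10.0.0.2) while its sibling port stays in the root namespace as the initiator side (10.0.0.1), giving a real TCP path on a single machine. Condensed from the trace (interface names cvl_0_0/cvl_0_1 are this rig's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # sanity-check the path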
00:55:38.116 [2024-06-11 03:52:19.452817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.375 [2024-06-11 03:52:19.576687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.375 [2024-06-11 03:52:19.588852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.375 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.376 null0 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.376 null1 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2356709 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2356709 /tmp/host.sock 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # '[' -z 2356709 ']' 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:55:38.376 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:55:38.376 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.376 [2024-06-11 03:52:19.664624] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:55:38.376 [2024-06-11 03:52:19.664662] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356709 ] 00:55:38.376 EAL: No free 2048 kB hugepages reported on node 1 00:55:38.376 [2024-06-11 03:52:19.722886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:38.376 [2024-06-11 03:52:19.762756] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@863 -- # return 0 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:38.635 03:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.635 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:55:38.895 03:52:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.895 [2024-06-11 03:52:20.202439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:38.895 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == \n\v\m\e\0 ]] 00:55:39.154 03:52:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:55:39.722 [2024-06-11 03:52:20.924565] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:55:39.722 [2024-06-11 03:52:20.924584] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:55:39.722 [2024-06-11 03:52:20.924596] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:55:39.722 [2024-06-11 03:52:21.050977] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:55:39.981 [2024-06-11 03:52:21.268338] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:55:39.981 [2024-06-11 03:52:21.268358] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.241 03:52:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0 ]] 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:55:40.241 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.242 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.501 [2024-06-11 03:52:21.710536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:55:40.501 [2024-06-11 03:52:21.710753] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:55:40.501 [2024-06-11 03:52:21.710775] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:55:40.501 03:52:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:55:40.501 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:40.502 [2024-06-11 03:52:21.840219] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:55:40.502 03:52:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:55:40.502 03:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@919 -- # sleep 1 00:55:40.761 [2024-06-11 03:52:22.104403] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:55:40.761 [2024-06-11 03:52:22.104420] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:55:40.761 [2024-06-11 03:52:22.104425] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.698 [2024-06-11 03:52:22.954799] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:55:41.698 [2024-06-11 03:52:22.954819] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.698 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:55:41.698 [2024-06-11 03:52:22.960316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:55:41.698 [2024-06-11 03:52:22.960336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:41.699 [2024-06-11 03:52:22.960347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:55:41.699 [2024-06-11 03:52:22.960355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:41.699 [2024-06-11 03:52:22.960365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:55:41.699 [2024-06-11 03:52:22.960374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:41.699 [2024-06-11 03:52:22.960384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:55:41.699 [2024-06-11 03:52:22.960393] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:55:41.699 [2024-06-11 03:52:22.960403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:41.699 [2024-06-11 03:52:22.970330] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 03:52:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.699 [2024-06-11 03:52:22.980371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:55:41.699 [2024-06-11 03:52:22.980672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:41.699 [2024-06-11 03:52:22.980698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x517b00 with addr=10.0.0.2, port=4420 00:55:41.699 [2024-06-11 03:52:22.980709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 [2024-06-11 03:52:22.980723] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 [2024-06-11 03:52:22.980736] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:41.699 [2024-06-11 03:52:22.980745] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:55:41.699 [2024-06-11 03:52:22.980756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:41.699 [2024-06-11 03:52:22.980770] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:55:41.699 [2024-06-11 03:52:22.990427] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:55:41.699 [2024-06-11 03:52:22.990598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:41.699 [2024-06-11 03:52:22.990612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x517b00 with addr=10.0.0.2, port=4420 00:55:41.699 [2024-06-11 03:52:22.990622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 [2024-06-11 03:52:22.990636] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 [2024-06-11 03:52:22.990653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:41.699 [2024-06-11 03:52:22.990662] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:55:41.699 [2024-06-11 03:52:22.990672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:41.699 [2024-06-11 03:52:22.990685] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:55:41.699 [2024-06-11 03:52:23.000478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:55:41.699 [2024-06-11 03:52:23.000679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:41.699 [2024-06-11 03:52:23.000695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x517b00 with addr=10.0.0.2, port=4420 00:55:41.699 [2024-06-11 03:52:23.000706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 [2024-06-11 03:52:23.000720] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 [2024-06-11 03:52:23.000733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:41.699 [2024-06-11 03:52:23.000743] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:55:41.699 [2024-06-11 03:52:23.000752] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:41.699 [2024-06-11 03:52:23.000765] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:55:41.699 [2024-06-11 03:52:23.010534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:55:41.699 [2024-06-11 03:52:23.010720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:41.699 [2024-06-11 03:52:23.010736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x517b00 with addr=10.0.0.2, port=4420 00:55:41.699 [2024-06-11 03:52:23.010746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 [2024-06-11 03:52:23.010761] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 [2024-06-11 03:52:23.010773] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:41.699 [2024-06-11 03:52:23.010783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:55:41.699 [2024-06-11 03:52:23.010793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:41.699 [2024-06-11 03:52:23.010806] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.699 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:41.699 [2024-06-11 03:52:23.020590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:55:41.699 [2024-06-11 03:52:23.020850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:41.699 [2024-06-11 03:52:23.020865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x517b00 with addr=10.0.0.2, port=4420 00:55:41.699 [2024-06-11 03:52:23.020875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 [2024-06-11 03:52:23.020889] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 [2024-06-11 03:52:23.020902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:41.699 [2024-06-11 03:52:23.020912] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:55:41.699 [2024-06-11 03:52:23.020921] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:41.699 [2024-06-11 03:52:23.020934] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:55:41.699 [2024-06-11 03:52:23.030647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:55:41.699 [2024-06-11 03:52:23.030853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:41.699 [2024-06-11 03:52:23.030867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x517b00 with addr=10.0.0.2, port=4420 00:55:41.699 [2024-06-11 03:52:23.030877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 [2024-06-11 03:52:23.030892] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 [2024-06-11 03:52:23.030904] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:41.699 [2024-06-11 03:52:23.030914] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:55:41.699 [2024-06-11 03:52:23.030924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:41.699 [2024-06-11 03:52:23.030937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:55:41.699 [2024-06-11 03:52:23.040698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:55:41.699 [2024-06-11 03:52:23.040884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:41.699 [2024-06-11 03:52:23.040898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x517b00 with addr=10.0.0.2, port=4420 00:55:41.699 [2024-06-11 03:52:23.040908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x517b00 is same with the state(5) to be set 00:55:41.699 [2024-06-11 03:52:23.040922] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x517b00 (9): Bad file descriptor 00:55:41.699 [2024-06-11 03:52:23.040935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:41.699 [2024-06-11 03:52:23.040946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:55:41.699 [2024-06-11 03:52:23.040955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:41.699 [2024-06-11 03:52:23.040967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:55:41.699 [2024-06-11 03:52:23.042077] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:55:41.699 [2024-06-11 03:52:23.042096] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_paths nvme0 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:55:41.700 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ 4421 == \4\4\2\1 ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_subsystem_names 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- 
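The waitforcondition trace interleaved above (autotest_common.sh markers @913 through @917) is the test's generic polling helper: it stores the condition string, then eval's it up to ten times and returns 0 on the first success. A hedged reconstruction from the xtrace alone; the sleep between retries and the failure return are assumptions, since the log only ever shows the success path:

    waitforcondition() {
        local cond=$1                 # @913: condition string, eval'ed verbatim
        local max=10                  # @914
        while (( max-- )); do         # @915
            eval "$cond" && return 0  # @916 / @917
            sleep 1                   # assumed back-off; not visible in the trace
        done
        return 1                      # assumed timeout path; never reached here
    }

This is what drives checks such as waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' earlier in this block.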
host/discovery.sh@59 -- # jq -r '.[].name' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_bdev_list 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # [[ '' == '' ]] 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:55:41.959 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local max=10 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( max-- )) 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # get_notification_count 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( notification_count == expected_count )) 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # return 0 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:41.960 03:52:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.339 [2024-06-11 03:52:24.364151] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:55:43.339 [2024-06-11 03:52:24.364169] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:55:43.339 [2024-06-11 03:52:24.364181] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:55:43.339 [2024-06-11 03:52:24.452446] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:55:43.339 [2024-06-11 03:52:24.517773] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:55:43.339 [2024-06-11 03:52:24.517801] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:55:43.339 03:52:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.339 request: 00:55:43.339 { 00:55:43.339 "name": "nvme", 00:55:43.339 "trtype": "tcp", 00:55:43.339 "traddr": "10.0.0.2", 00:55:43.339 "hostnqn": "nqn.2021-12.io.spdk:test", 00:55:43.339 "adrfam": "ipv4", 00:55:43.339 "trsvcid": "8009", 00:55:43.339 "wait_for_attach": true, 00:55:43.339 "method": "bdev_nvme_start_discovery", 00:55:43.339 "req_id": 1 00:55:43.339 } 00:55:43.339 Got JSON-RPC error response 00:55:43.339 response: 00:55:43.339 { 00:55:43.339 "code": -17, 00:55:43.339 "message": "File exists" 00:55:43.339 } 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.339 request: 00:55:43.339 { 00:55:43.339 "name": "nvme_second", 00:55:43.339 "trtype": "tcp", 00:55:43.339 "traddr": "10.0.0.2", 00:55:43.339 "hostnqn": "nqn.2021-12.io.spdk:test", 00:55:43.339 "adrfam": "ipv4", 00:55:43.339 "trsvcid": "8009", 00:55:43.339 "wait_for_attach": true, 00:55:43.339 "method": "bdev_nvme_start_discovery", 00:55:43.339 "req_id": 1 00:55:43.339 } 00:55:43.339 Got JSON-RPC error response 00:55:43.339 response: 00:55:43.339 { 00:55:43.339 "code": -17, 00:55:43.339 "message": "File exists" 00:55:43.339 } 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
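Both request/response dumps above show the same contract: bdev_nvme_start_discovery refuses to start a second discovery service that collides with a running one (same -b base name at @143, same target trid at @149) and returns JSON-RPC error -17 ("File exists"), which the NOT wrapper turns into a passing assertion. Replayed by hand it would look like this, using the same socket and arguments that appear in the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # second run while discovery "nvme" is active: error -17, "File exists"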
# sort 00:55:43.339 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:43.598 03:52:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:44.532 [2024-06-11 03:52:25.771119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:44.532 [2024-06-11 03:52:25.771145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x54fe10 with addr=10.0.0.2, port=8010 00:55:44.532 [2024-06-11 03:52:25.771161] nvme_tcp.c:2706:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:55:44.532 [2024-06-11 03:52:25.771169] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:55:44.533 [2024-06-11 03:52:25.771177] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:55:45.467 [2024-06-11 03:52:26.773568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:55:45.467 [2024-06-11 03:52:26.773593] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x515ca0 with addr=10.0.0.2, port=8010 00:55:45.467 [2024-06-11 03:52:26.773607] nvme_tcp.c:2706:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:55:45.467 [2024-06-11 03:52:26.773615] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:55:45.467 [2024-06-11 03:52:26.773623] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:55:46.404 [2024-06-11 03:52:27.775719] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:55:46.404 request: 00:55:46.404 { 00:55:46.404 "name": "nvme_second", 00:55:46.404 "trtype": "tcp", 00:55:46.404 "traddr": "10.0.0.2", 00:55:46.404 "hostnqn": "nqn.2021-12.io.spdk:test", 00:55:46.404 "adrfam": "ipv4", 00:55:46.404 "trsvcid": "8010", 00:55:46.404 "attach_timeout_ms": 3000, 00:55:46.404 "method": "bdev_nvme_start_discovery", 00:55:46.404 "req_id": 1 00:55:46.404 } 00:55:46.404 Got JSON-RPC 
error response 00:55:46.404 response: 00:55:46.404 { 00:55:46.404 "code": -110, 00:55:46.404 "message": "Connection timed out" 00:55:46.404 } 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:55:46.404 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2356709 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:55:46.723 rmmod nvme_tcp 00:55:46.723 rmmod nvme_fabrics 00:55:46.723 rmmod nvme_keyring 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2356533 ']' 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2356533 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@949 -- # '[' -z 2356533 ']' 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # kill -0 2356533 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # uname 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2356533 00:55:46.723 03:52:27 
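The port-8010 case above fails differently from the -17 cases: no subsystem ever listens on 8010, so each connect() attempt gets ECONNREFUSED, and once the -T 3000 attach timeout expires the RPC itself returns JSON-RPC error -110 ("Connection timed out") instead of blocking. The distinguishing flags, as issued in the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
    # -T bounds the attach in milliseconds; -w (wait_for_attach) is omitted here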
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2356533' 00:55:46.723 killing process with pid 2356533 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@968 -- # kill 2356533 00:55:46.723 03:52:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@973 -- # wait 2356533 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:46.982 03:52:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:48.886 03:52:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:55:48.886 00:55:48.886 real 0m16.977s 00:55:48.886 user 0m20.121s 00:55:48.886 sys 0m5.722s 00:55:48.886 03:52:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:55:48.886 03:52:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:55:48.886 ************************************ 00:55:48.886 END TEST nvmf_host_discovery 00:55:48.886 ************************************ 00:55:48.886 03:52:30 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:55:48.886 03:52:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:55:48.886 03:52:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:55:48.886 03:52:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:55:48.886 ************************************ 00:55:48.886 START TEST nvmf_host_multipath_status 00:55:48.886 ************************************ 00:55:48.886 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:55:49.145 * Looking for test storage... 
00:55:49.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:55:49.145 03:52:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:55:49.145 03:52:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:55:55.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:55:55.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:55:55.716 Found net devices under 0000:86:00.0: cvl_0_0 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:55:55.716 Found net devices under 0000:86:00.1: cvl_0_1 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:55:55.716 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:55:55.717 03:52:36 
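The gather_supported_nvmf_pci_devs walk above matches PCI IDs against the e810/x722/mlx allow-lists and selects the two Intel 0x159b ports bound to the ice driver, exposed as cvl_0_0 and cvl_0_1. The same match can be reproduced outside the script with lspci (a sketch; the exact model string depends on the local lspci database):

    lspci -nn -d 8086:159b
    # e.g. 86:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller E810 ... [8086:159b]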
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:55:55.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:55.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:55:55.717 00:55:55.717 --- 10.0.0.2 ping statistics --- 00:55:55.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:55.717 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:55:55.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:55.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:55:55.717 00:55:55.717 --- 10.0.0.1 ping statistics --- 00:55:55.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:55.717 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@723 -- # xtrace_disable 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2361868 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2361868 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2361868 ']' 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:55.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:55.717 [2024-06-11 03:52:36.548125] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
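The nvmf_tcp_init sequence above builds the point-to-point test topology: one E810 port stays in the root namespace as the initiator (10.0.0.1 on cvl_0_1), the other is moved into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2 on cvl_0_0), and both directions are verified with a single ping. Condensed from the trace, with the xtrace prefixes stripped:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

The nvmf_tgt launched next is then wrapped in ip netns exec cvl_0_0_ns_spdk, so all of its listeners live on the target side of this link.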
00:55:55.717 [2024-06-11 03:52:36.548167] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:55.717 EAL: No free 2048 kB hugepages reported on node 1 00:55:55.717 [2024-06-11 03:52:36.610042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:55:55.717 [2024-06-11 03:52:36.651173] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:55.717 [2024-06-11 03:52:36.651213] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:55.717 [2024-06-11 03:52:36.651222] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:55.717 [2024-06-11 03:52:36.651229] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:55.717 [2024-06-11 03:52:36.651235] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:55.717 [2024-06-11 03:52:36.651275] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:55:55.717 [2024-06-11 03:52:36.651279] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@729 -- # xtrace_disable 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2361868 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:55:55.717 [2024-06-11 03:52:36.932028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:55.717 03:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:55:55.976 Malloc0 00:55:55.976 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:55:55.976 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:55:56.236 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:55:56.236 [2024-06-11 03:52:37.633329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:56.495 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:55:56.495 [2024-06-11 03:52:37.789836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:55:56.495 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2362102 00:55:56.495 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:55:56.495 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:55:56.495 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2362102 /var/tmp/bdevperf.sock 00:55:56.495 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # '[' -z 2362102 ']' 00:55:56.495 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:55:56.496 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local max_retries=100 00:55:56.496 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:55:56.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:55:56.496 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # xtrace_disable 00:55:56.496 03:52:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:55:56.755 03:52:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:55:56.755 03:52:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@863 -- # return 0 00:55:56.755 03:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:55:57.015 03:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:55:57.275 Nvme0n1 00:55:57.275 03:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:55:57.534 Nvme0n1 00:55:57.534 03:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:55:57.534 03:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:56:00.067 03:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:56:00.067 03:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:56:00.067 03:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:56:00.067 03:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:56:01.001 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:56:01.001 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:56:01.001 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:01.001 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:01.259 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:01.517 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:01.517 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:01.517 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:01.517 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:01.775 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:01.775 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:01.775 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:01.775 03:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:56:01.775 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:01.775 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:56:02.034 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:02.034 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:02.034 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:02.034 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:56:02.034 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:02.293 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:56:02.552 03:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:56:03.489 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:56:03.489 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:56:03.489 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:03.489 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:03.748 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:03.748 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:56:03.748 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:03.748 03:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:03.748 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:03.748 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:03.748 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:03.748 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:04.006 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:56:04.006 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:04.007 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:04.007 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:04.265 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:04.525 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:04.525 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:56:04.525 03:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:04.783 03:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:56:05.042 03:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:56:05.979 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:56:05.979 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:56:05.979 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:05.979 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:06.238 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:06.239 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:56:06.239 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:06.239 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:06.239 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:06.239 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:06.239 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:06.239 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:06.498 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:06.498 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:06.498 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:06.498 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:06.757 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:06.757 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:06.757 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:06.757 03:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:06.757 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:06.757 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:56:06.757 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:06.757 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:07.016 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:07.016 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:56:07.016 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:07.275 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:56:07.533 03:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:56:08.519 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:56:08.520 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:56:08.520 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:08.520 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:08.520 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:08.520 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:56:08.520 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:08.520 03:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:08.779 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:08.779 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:08.779 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:08.779 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:09.038 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:09.038 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:09.038 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:09.038 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:09.296 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:09.296 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:09.296 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:09.296 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:09.296 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
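
Every check in these cycles reduces to the same probe: ask the initiator-side bdevperf for its view of the I/O paths over its RPC socket, pick the path whose listener port matches, and test one boolean field. A minimal sketch of that probe, assuming the SPDK rpc.py shown in the log is on PATH and the socket is /var/tmp/bdevperf.sock as above:

    # port_status PORT FIELD EXPECTED -- FIELD is current|connected|accessible,
    # matching the jq filters in the log. Returns 0 when the live value matches.
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r --arg port "$port" --arg field "$field" \
                '.poll_groups[].io_paths[] | select(.transport.trsvcid==$port) | .[$field]')
        [[ $actual == "$expected" ]]
    }

    # With both listeners optimized, only 4420 carries I/O here, since the
    # default multipath policy is active_passive until it is changed below:
    port_status 4420 current true
    port_status 4421 current false

The two paths being polled exist because bdevperf attached the same subsystem twice under one name (-b Nvme0, ports 4420 and 4421, the second with -x multipath), as shown earlier in the log.
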
00:56:09.297 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:56:09.297 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:09.297 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:09.555 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:09.555 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:56:09.555 03:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:56:09.814 03:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:56:09.814 03:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:11.191 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:11.449 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:11.449 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
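
Each cycle begins by reprogramming the target side: one nvmf_subsystem_listener_set_ana_state RPC per listener, then a one-second pause so the host can consume the ANA change notification before the paths are polled again (as the lines below do). A sketch of that helper, with the NQN and addresses taken verbatim from the log and the settle sleep, which the harness issues separately, folded in:

    # set_ANA_state STATE_4420 STATE_4421
    # States exercised above: optimized, non_optimized, inaccessible.
    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
        sleep 1   # let the ANA change event propagate before check_status
    }

Note the pattern the inaccessible/inaccessible cycle verifies: an inaccessible listener stays connected (the TCP connection survives) but is neither accessible nor current, hence the false false true true false false expectation above.
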
00:56:11.449 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:11.449 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:11.706 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:11.706 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:56:11.706 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:11.706 03:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:11.706 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:11.706 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:56:11.706 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:11.706 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:11.964 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:11.964 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:56:11.964 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:56:12.222 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:56:12.222 03:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:13.595 03:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:13.852 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:13.852 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:13.852 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:13.852 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:14.111 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:14.111 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:56:14.111 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:14.111 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:14.369 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:14.369 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:56:14.369 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:14.369 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:14.369 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:14.369 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:56:14.627 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:56:14.627 03:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:56:14.886 03:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:56:15.145 03:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:56:16.081 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:56:16.081 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:56:16.081 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:16.081 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:16.340 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:16.598 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:16.598 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:16.598 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:16.598 03:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:16.856 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:16.856 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:16.856 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:16.856 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:16.856 03:52:58 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:16.856 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:56:16.856 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:16.856 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:17.115 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:17.115 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:56:17.115 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:17.373 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:56:17.373 03:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:56:18.747 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:56:18.747 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:56:18.747 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:18.747 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:18.747 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:18.748 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:56:18.748 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:18.748 03:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:18.748 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:18.748 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:18.748 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:18.748 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:19.006 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:19.006 03:53:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:19.006 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:19.006 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:19.264 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:19.264 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:19.264 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:19.264 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:19.523 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:19.523 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:56:19.523 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:19.523 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:19.523 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:19.523 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:56:19.523 03:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:19.781 03:53:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:56:20.039 03:53:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:56:20.975 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:56:20.975 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:56:20.975 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:20.975 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:21.232 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:21.232 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:56:21.232 03:53:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:21.232 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:21.232 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:21.232 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:21.232 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:21.232 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:21.490 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:21.490 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:21.490 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:21.490 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:21.748 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:21.748 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:21.748 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:21.748 03:53:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:22.006 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:22.006 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:56:22.006 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:22.006 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:22.006 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:22.006 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:56:22.006 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:56:22.265 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:56:22.523 03:53:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:56:23.459 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:56:23.459 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:56:23.459 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:23.459 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:56:23.718 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:23.718 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:56:23.718 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:23.718 03:53:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:56:23.718 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:23.718 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:56:23.976 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:23.976 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:56:23.977 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:23.977 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:56:23.977 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:23.977 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:56:24.235 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:24.235 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:56:24.235 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:24.235 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2362102 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2362102 ']' 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2362102 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:56:24.494 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2362102 00:56:24.778 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:56:24.779 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:56:24.779 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2362102' 00:56:24.779 killing process with pid 2362102 00:56:24.779 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2362102 00:56:24.779 03:53:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2362102 00:56:24.779 Connection closed with partial response: 00:56:24.779 00:56:24.779 00:56:24.779 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2362102 00:56:24.779 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:56:24.779 [2024-06-11 03:52:37.845287] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:56:24.779 [2024-06-11 03:52:37.845343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2362102 ] 00:56:24.779 EAL: No free 2048 kB hugepages reported on node 1 00:56:24.779 [2024-06-11 03:52:37.900592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:24.779 [2024-06-11 03:52:37.941407] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:56:24.779 Running I/O for 90 seconds... 
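
Just above, the harness tears bdevperf down (pid 2362102) only after confirming the pid still names the process it started (ps reports an SPDK reactor thread, reactor_2), then replays the bdevperf log from try.txt; everything from here on is that replay. A condensed sketch of the teardown pattern, not a verbatim copy of autotest_common.sh, assuming as in the harness that the pid is a child of the calling shell:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        # only kill what we own; SPDK apps show up as reactor_N (reactor_2 above)
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"   # reap it; wait works because bdevperf is our child
    }

Killing bdevperf mid-run is what produces the "Connection closed with partial response" line above: the perform_tests RPC client loses its socket before the 120-second timeout.
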
00:56:24.779 [2024-06-11 03:52:51.007629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.007987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.007999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
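
Every completion in this dump carries the same status, printed as (03/02): Status Code Type 3h (path-related), Status Code 02h, Asymmetric Access Inaccessible, which is the expected outcome for WRITEs caught in flight on a path whose ANA state was just set to inaccessible. dnr:0 means the Do Not Retry bit is clear, so the initiator is free to retry each I/O on the other path. For quick triage of a dump like this, the counts can be pulled straight from the replayed log (path as shown above):

    # How many completions failed with ANA-inaccessible, and which LBAs were in flight.
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"
    grep -o 'lba:[0-9]*' "$log" | sort -t: -k2,2n -u | head
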
00:56:24.779 [2024-06-11 03:52:51.008239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.779 [2024-06-11 03:52:51.008296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:56:24.779 [2024-06-11 03:52:51.008308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.008742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.008749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 
00:56:24.780 [2024-06-11 03:52:51.009361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.780 [2024-06-11 03:52:51.009613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:56:24.780 [2024-06-11 03:52:51.009627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.781 [2024-06-11 03:52:51.009634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.781 [2024-06-11 03:52:51.009953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.009985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.009992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 
lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.781 [2024-06-11 03:52:51.010688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.781 [2024-06-11 03:52:51.010709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.781 [2024-06-11 03:52:51.010728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.781 [2024-06-11 03:52:51.010747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.781 [2024-06-11 03:52:51.010768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.781 [2024-06-11 03:52:51.010788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.781 [2024-06-11 03:52:51.010806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:56:24.781 [2024-06-11 03:52:51.010818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:56:24.782 [2024-06-11 03:52:51.010950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.010987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.782 [2024-06-11 03:52:51.010994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.782 [2024-06-11 03:52:51.011516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.782 [2024-06-11 03:52:51.011554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:24.782 [2024-06-11 03:52:51.011566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.011573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.011585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.011592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.011987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.011999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 
lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:56:24.783 [2024-06-11 03:52:51.012455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.783 [2024-06-11 03:52:51.012462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:56:24.783 [2024-06-11 03:52:51.012474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:56:24.783 [2024-06-11 03:52:51.012481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:56:24.783 [2024-06-11 03:52:51.012493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:56:24.783 [2024-06-11 03:52:51.012499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:56:24.783 [2024-06-11 03:52:51.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:56:24.783 [2024-06-11 03:52:51.012518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... several hundred similar notice pairs omitted: nvme_io_qpair_print_command WRITE commands (sqid:1, lba:42296-43176, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba:42160-42288, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by an spdk_nvme_print_completion notice reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 2024-06-11 03:52:51.012474 through 03:52:51.023878 ...]
00:56:24.789 [2024-06-11 03:52:51.023889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:56:24.789 [2024-06-11 03:52:51.023896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.023908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.023915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.023926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.023933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.023945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.023951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.023963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.023969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.023981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.023988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.023999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.789 [2024-06-11 03:52:51.024719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.789 [2024-06-11 03:52:51.024928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:24.789 [2024-06-11 03:52:51.024940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.024946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.024958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.024965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.024977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.024983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.024995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:56:24.790 [2024-06-11 03:52:51.025279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.025587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.025990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.026000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.026020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.026027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.026040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.026046] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.026058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.026064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:56:24.790 [2024-06-11 03:52:51.026076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.790 [2024-06-11 03:52:51.026083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 
03:52:51.026231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.791 [2024-06-11 03:52:51.026305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42912 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:63 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.026737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.026745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.027105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.027117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.027131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.027138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.027151] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.027158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:56:24.791 [2024-06-11 03:52:51.027171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.791 [2024-06-11 03:52:51.027178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 
p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.792 [2024-06-11 03:52:51.027582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.027706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.027713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.028699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.028708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.028724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.028731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.028743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.028749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.028762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.028768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.028780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.028787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:56:24.792 [2024-06-11 03:52:51.028799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.792 [2024-06-11 03:52:51.028805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.793 [2024-06-11 03:52:51.028879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.028985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.028992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42448 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:56:24.793 [2024-06-11 03:52:51.029588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:56:24.793 [2024-06-11 03:52:51.029680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.793 [2024-06-11 03:52:51.029687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.029699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.029705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.029717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.029724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.029735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.029742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.029754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.029760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.029772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.029779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.029792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.029798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.029810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.029816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.034984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.034990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.794 [2024-06-11 03:52:51.035049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.794 [2024-06-11 03:52:51.035136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.794 [2024-06-11 03:52:51.035865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:56:24.794 [2024-06-11 03:52:51.035877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.035884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.035896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.035902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.035914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.035922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.035934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.035940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.035952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.035959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.035971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.035977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.035989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.035995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:56:24.795 [2024-06-11 03:52:51.036145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.795 [2024-06-11 03:52:51.036571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.795 [2024-06-11 03:52:51.036608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:56:24.795 [2024-06-11 03:52:51.036620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.796 [2024-06-11 03:52:51.036702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42368 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.036980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.036994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037642] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 03:52:51.037812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.796 [2024-06-11 03:52:51.037818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:56:24.796 [2024-06-11 
03:52:51.037830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.037987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:56:24.797 [2024-06-11 03:52:51.037999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.797 [2024-06-11 03:52:51.038006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 
sqhd:003e p:0 m:0 dnr:0
00:56:24.797 [2024-06-11 03:52:51.038024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:56:24.797 [2024-06-11 03:52:51.038031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0
[several hundred further command/completion pairs, 03:52:51.038 through 03:52:51.049, elided: every outstanding WRITE (lba 42296-43176, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (lba 42160-42288, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1 is printed by nvme_io_qpair_print_command, and each completes via spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, p:0 m:0 dnr:0]
00:56:24.802 [2024-06-11 03:52:51.049004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:56:24.802 [2024-06-11 03:52:51.049032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:56:24.802 [2024-06-11 03:52:51.049039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.802 [2024-06-11 03:52:51.049058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.802 [2024-06-11 03:52:51.049076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.802 [2024-06-11 03:52:51.049095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.802 [2024-06-11 03:52:51.049264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:56:24.802 [2024-06-11 03:52:51.049276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.049520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.049526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:56:24.803 [2024-06-11 03:52:51.050168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.803 [2024-06-11 03:52:51.050570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:56:24.803 [2024-06-11 03:52:51.050583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.804 [2024-06-11 03:52:51.050721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.050982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.050994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.804 [2024-06-11 03:52:51.051643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:56:24.804 [2024-06-11 03:52:51.051752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:56:24.804 [2024-06-11 03:52:51.051789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.804 [2024-06-11 03:52:51.051796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.051986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.051998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052123] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.805 [2024-06-11 03:52:51.052533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 
03:52:51.052593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.805 [2024-06-11 03:52:51.052706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:56:24.805 [2024-06-11 03:52:51.052718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.806 [2024-06-11 03:52:51.052725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:56:24.806 [2024-06-11 03:52:51.052737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.806 [2024-06-11 03:52:51.052743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:56:24.806 [2024-06-11 03:52:51.052756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.806 [2024-06-11 03:52:51.052762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:56:24.806 [2024-06-11 03:52:51.052775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:42264 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:56:24.806 [2024-06-11 03:52:51.052781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:56:24.806 [2024-06-11 03:52:51.052793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:56:24.806 [2024-06-11 03:52:51.052800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:56:24.806 [2024-06-11 03:52:51.052850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:56:24.806 [2024-06-11 03:52:51.052856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
[... hundreds of near-identical *NOTICE* command/completion pairs trimmed (2024-06-11 03:52:51.052xxx-03:52:51.063xxx): every outstanding READ (lba:42160-42288, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (lba:42296-43176, SGL DATA BLOCK OFFSET 0x0 len:0x1000) on qid:1 is reprinted and completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0, the sqhd counter wrapping through 0000-007f as the queue drains ...]
00:56:24.811 [2024-06-11 03:52:51.063490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.063496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.063508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.063515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.063527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.063534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.063547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.063554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.811 [2024-06-11 03:52:51.064044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.811 [2024-06-11 03:52:51.064310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:56:24.811 [2024-06-11 03:52:51.064323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:43008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:43048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:43056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:56:24.812 [2024-06-11 03:52:51.064520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.064934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.064954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.064973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.064985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.064992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:56:24.812 [2024-06-11 03:52:51.065247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.065268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.065286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.065305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.065324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.812 [2024-06-11 03:52:51.065344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:56:24.812 [2024-06-11 03:52:51.065356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.065362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:56:24.813 [2024-06-11 03:52:51.065375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.065381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:42344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066560] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:42448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:42464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:42480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.066670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.066682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:42576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.813 [2024-06-11 03:52:51.071827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:56:24.813 [2024-06-11 03:52:51.071839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.071988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.071994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:42816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:56:24.814 [2024-06-11 03:52:51.072408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:42832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:56:24.814 [2024-06-11 03:52:51.072414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:56:24.814 [2024-06-11 03:52:51.072426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:56:24.814 [2024-06-11 03:52:51.072433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0
[roughly a hundred further command/completion pairs elided: the 03:52:51 burst covers WRITE commands in the lba:42296-43176 range and READ commands in the lba:42160-42288 range, a second burst at 03:53:03 covers READ lba:83008-83240 and WRITE lba:83264-83520, all on qid:1, and every completion reports the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status. That status is the ANA "inaccessible" state this multipath test deliberately drives the path into, so the flood of notices is expected behavior here, not a data-integrity failure.]
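[Note: in these completion notices the parenthesized pair after the status string is the NVMe status code type and status code (SCT/SC). A small illustrative bash helper, not part of the SPDK tree, mapping the path-related codes seen during ANA testing:]

decode_nvme_path_status() {
    # SCT 03h is "Path Related Status"; the SC values below follow the NVMe base spec.
    case "$1" in
        03/00) echo "INTERNAL PATH ERROR" ;;
        03/01) echo "ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
        03/02) echo "ASYMMETRIC ACCESS INACCESSIBLE" ;;
        03/03) echo "ASYMMETRIC ACCESS TRANSITION" ;;
        *)     echo "unknown status $1" ;;
    esac
}
decode_nvme_path_status 03/02   # prints the status repeated throughout this log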
00:56:24.817 Received shutdown signal, test time was about 26.864768 seconds
00:56:24.817
00:56:24.817                                                           Latency(us)
00:56:24.817     Device Information          : runtime(s)     IOPS    MiB/s  Fail/s   TO/s   Average      min        max
00:56:24.817     Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:56:24.817     Verification LBA range: start 0x0 length 0x4000
00:56:24.817     Nvme0n1                     :      26.86 10460.93    40.86    0.00   0.00  12216.60   225.28 3083812.08
00:56:24.817 ===================================================================================================================
00:56:24.817     Total                       :            10460.93    40.86    0.00   0.00  12216.60   225.28 3083812.08
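[Note: at the fixed 4096-byte IO size the MiB/s column follows directly from the IOPS column; a one-line cross-check of the figures above, illustrative only and not part of the harness:]

awk 'BEGIN { iops = 10460.93; io_bytes = 4096; printf "%.2f MiB/s\n", iops * io_bytes / (1024 * 1024) }'
# prints 40.86, matching both the Nvme0n1 row and the Total row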
00:56:24.817 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:56:25.078 03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:56:25.078 rmmod nvme_tcp
00:56:25.078 rmmod nvme_fabrics
00:56:25.078 rmmod nvme_keyring
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2361868 ']'
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2361868
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@949 -- # '[' -z 2361868 ']'
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # kill -0 2361868
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # uname
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2361868
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # process_name=reactor_0
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2361868'
killing process with pid 2361868
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # kill 2361868
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # wait 2361868
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
03:53:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:56:27.243 03:53:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:56:27.243
00:56:27.243 real    0m38.379s
00:56:27.243 user    1m41.787s
00:56:27.243 sys     0m11.103s
00:56:27.502 03:53:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # xtrace_disable
00:56:27.502 03:53:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:56:27.502 ************************************
00:56:27.502 END TEST nvmf_host_multipath_status
00:56:27.502 ************************************
00:56:27.502 03:53:08 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:56:27.502 03:53:08 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
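[Note: the END TEST banner above, the matching START TEST banner below, and the real/user/sys block come from the harness's run_test wrapper. A minimal sketch of such a wrapper, consistent with this output but not a copy of SPDK's autotest_common.sh:]

run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # emits the real/user/sys block seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}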
00:56:27.502 03:53:08 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable
00:56:27.502 03:53:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:56:27.502 ************************************
00:56:27.502 START TEST nvmf_discovery_remove_ifc
00:56:27.502 ************************************
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:56:27.502 * Looking for test storage...
00:56:27.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
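[Note: the NVME_HOSTNQN/NVME_HOSTID pair above comes from nvme-cli. Outside the harness an equivalent pair can be generated the same way; splitting the UUID out with parameter expansion is an assumption about the NQN format, not SPDK's code:]

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # keep only the trailing UUID
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"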
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:56:27.502 [paths/export.sh@3 and @4 prepend the same go/protoc/golangci directories once more each, @5 exports PATH, and @6 echoes the resulting value; these near-identical multi-kilobyte PATH strings are elided]
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:56:27.503 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:56:27.502 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:27.503 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:27.503 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:27.503 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:56:27.503 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:56:27.503 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:56:27.503 03:53:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:56:34.070 03:53:14 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:56:34.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:56:34.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:56:34.070 Found net devices under 0000:86:00.0: cvl_0_0 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:56:34.070 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:56:34.071 Found net devices under 0000:86:00.1: cvl_0_1 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:56:34.071 
03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:56:34.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:56:34.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:56:34.071
00:56:34.071 --- 10.0.0.2 ping statistics ---
00:56:34.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:56:34.071 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:56:34.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:56:34.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms
00:56:34.071
00:56:34.071 --- 10.0.0.1 ping statistics ---
00:56:34.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:56:34.071 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:56:34.071 03:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@723 -- # xtrace_disable
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2370693
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2370693
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2370693 ']'
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100
00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable
03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:56:34.071 [2024-06-11 03:53:15.066665] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
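[Note: condensed, the nvmf_tcp_init plumbing traced above reduces to the following standalone commands; the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses are specific to this test bed:]

ip netns add cvl_0_0_ns_spdk                                   # target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability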
00:56:34.071 [2024-06-11 03:53:15.066707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:56:34.071 EAL: No free 2048 kB hugepages reported on node 1 00:56:34.071 [2024-06-11 03:53:15.126887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:34.071 [2024-06-11 03:53:15.166869] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:56:34.071 [2024-06-11 03:53:15.166906] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:56:34.071 [2024-06-11 03:53:15.166914] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:56:34.071 [2024-06-11 03:53:15.166921] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:56:34.071 [2024-06-11 03:53:15.166926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:56:34.071 [2024-06-11 03:53:15.166942] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@729 -- # xtrace_disable 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:34.071 [2024-06-11 03:53:15.297535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:56:34.071 [2024-06-11 03:53:15.305670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:56:34.071 null0 00:56:34.071 [2024-06-11 03:53:15.337688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2370719 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2370719 /tmp/host.sock 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # '[' -z 2370719 ']' 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local rpc_addr=/tmp/host.sock 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local max_retries=100 00:56:34.071 
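Two separate SPDK applications are now in play: the namespaced nvmf_tgt (pid 2370693, core mask 0x2, RPC at /var/tmp/spdk.sock) is the NVMe-oF target, while a second nvmf_tgt on the host side (pid 2370719, core mask 0x1, RPC at /tmp/host.sock, bdev_nvme debug logging on) acts as the initiator whose discovery service this test exercises. A minimal sketch of the launch-and-attach sequence, assuming SPDK's stock scripts/rpc.py client and abbreviated build paths (the trace drives the same RPCs through its rpc_cmd wrapper):

  # Target inside the namespace: discovery on 8009, I/O on 4420.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # Host-side app that will own the NVMe bdevs; started with --wait-for-rpc,
  # so options must be set before the framework is initialized.
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init

  # Attach through the discovery service. The aggressive timeouts are what
  # make the interface-removal scenario observable within a few seconds.
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach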
03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:56:34.071 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # xtrace_disable 00:56:34.071 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:34.071 [2024-06-11 03:53:15.401863] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:56:34.071 [2024-06-11 03:53:15.401900] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2370719 ] 00:56:34.071 EAL: No free 2048 kB hugepages reported on node 1 00:56:34.071 [2024-06-11 03:53:15.460272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:56:34.330 [2024-06-11 03:53:15.501771] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@863 -- # return 0 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:34.331 03:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:35.708 [2024-06-11 03:53:16.675107] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:56:35.708 [2024-06-11 03:53:16.675126] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:56:35.708 [2024-06-11 03:53:16.675139] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:56:35.708 [2024-06-11 03:53:16.763410] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:56:35.708 [2024-06-11 03:53:16.865326] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:56:35.708 [2024-06-11 03:53:16.865374] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:56:35.708 [2024-06-11 03:53:16.865393] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:56:35.708 [2024-06-11 03:53:16.865406] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:56:35.708 [2024-06-11 03:53:16.865423] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:35.708 [2024-06-11 03:53:16.873004] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbffa40 was disconnected and freed. delete nvme_qpair. 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:56:35.708 03:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:56:35.708 03:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:56:37.086 03:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:56:38.023 03:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:38.958 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:38.958 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:56:38.959 03:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:56:39.894 03:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:41.273 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:41.274 [2024-06-11 03:53:22.306811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:56:41.274 [2024-06-11 03:53:22.306851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:56:41.274 [2024-06-11 03:53:22.306865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:41.274 [2024-06-11 03:53:22.306876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:56:41.274 [2024-06-11 03:53:22.306885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:41.274 [2024-06-11 03:53:22.306894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:56:41.274 [2024-06-11 03:53:22.306904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:41.274 [2024-06-11 03:53:22.306913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:56:41.274 [2024-06-11 03:53:22.306923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:41.274 [2024-06-11 03:53:22.306933] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:56:41.274 [2024-06-11 03:53:22.306942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:41.274 [2024-06-11 03:53:22.306952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc6290 is same with the state(5) to be set 00:56:41.274 [2024-06-11 03:53:22.316834] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc6290 (9): Bad file descriptor 00:56:41.274 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:56:41.274 03:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:41.274 [2024-06-11 03:53:22.326872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:42.207 [2024-06-11 03:53:23.389078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:56:42.207 [2024-06-11 03:53:23.389123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc6290 with addr=10.0.0.2, port=4420 00:56:42.207 [2024-06-11 03:53:23.389145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc6290 is same with the state(5) to be set 00:56:42.207 [2024-06-11 03:53:23.389179] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc6290 (9): Bad file descriptor 00:56:42.207 [2024-06-11 03:53:23.389626] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:56:42.207 [2024-06-11 03:53:23.389655] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:56:42.207 [2024-06-11 03:53:23.389671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:56:42.207 [2024-06-11 03:53:23.389689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:56:42.207 [2024-06-11 03:53:23.389714] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
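This is the failure half of the test playing out: with cvl_0_0 de-addressed and downed inside the namespace, every reconnect attempt dies in connect() with errno 110 (ETIMEDOUT), failover has nowhere to go, and once ctrlr-loss-timeout-sec (2s) expires the bdev layer deletes the controller instead of retrying further, at which point nvme0n1 drops out of bdev_get_bdevs and the polling loop can return. That loop, condensed from the trace's get_bdev_list/wait_for_bdev helpers (same jq | sort | xargs pipeline; the one-second cadence matches the sleep 1 calls above):

  wait_for_bdev() {
      # Poll until the host app's bdev list equals the expected string.
      local expected=$1 bdevs
      while true; do
          bdevs=$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                  | jq -r '.[].name' | sort | xargs)
          [[ $bdevs == "$expected" ]] && break
          sleep 1
      done
  }

  wait_for_bdev ''    # after the ifdown: wait for nvme0n1 to disappear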
00:56:42.207 [2024-06-11 03:53:23.389732] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:56:42.207 03:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:43.143 [2024-06-11 03:53:24.392225] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:56:43.143 [2024-06-11 03:53:24.392261] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:56:43.143 [2024-06-11 03:53:24.392285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:56:43.143 [2024-06-11 03:53:24.392300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:43.143 [2024-06-11 03:53:24.392312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:56:43.143 [2024-06-11 03:53:24.392323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:43.143 [2024-06-11 03:53:24.392334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:56:43.143 [2024-06-11 03:53:24.392344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:43.143 [2024-06-11 03:53:24.392355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:56:43.143 [2024-06-11 03:53:24.392364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:43.143 [2024-06-11 03:53:24.392375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:56:43.143 [2024-06-11 03:53:24.392384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:56:43.143 [2024-06-11 03:53:24.392393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
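The discovery connection suffers the same fate: its persistent admin queue to 10.0.0.2:8009 times out, the poller prunes the nqn.2016-06.io.spdk:cnode0 entry, and the discovery controller itself lands in a failed state (the ABORTED - SQ DELETION completions above are its outstanding async-event and keep-alive commands being drained). While the outage lasts, the host app's remaining state can be inspected directly over the same RPC socket, for example:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # remaining controllers
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs              # remaining bdevs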
00:56:43.143 [2024-06-11 03:53:24.392449] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc56e0 (9): Bad file descriptor 00:56:43.143 [2024-06-11 03:53:24.393447] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:56:43.143 [2024-06-11 03:53:24.393464] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:56:43.143 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:56:43.402 03:53:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:56:44.337 03:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:45.272 [2024-06-11 03:53:26.450573] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:56:45.272 [2024-06-11 03:53:26.450590] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:56:45.272 [2024-06-11 03:53:26.450601] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:56:45.272 [2024-06-11 03:53:26.577012] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:45.272 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:45.530 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:56:45.530 03:53:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:56:45.530 [2024-06-11 03:53:26.759536] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:56:45.530 [2024-06-11 03:53:26.759570] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:56:45.530 [2024-06-11 03:53:26.759586] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:56:45.530 [2024-06-11 03:53:26.759598] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:56:45.530 [2024-06-11 03:53:26.759606] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:56:45.530 [2024-06-11 03:53:26.768395] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xbe7540 was disconnected and freed. delete nvme_qpair. 
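Recovery is the mirror image: as soon as the address and link are restored inside the namespace, the still-running discovery poller reconnects to 8009, the log page reports nqn.2016-06.io.spdk:cnode0 again, and a fresh controller attaches as nvme1 (not nvme0, since the old controller was deleted rather than revived). The restore step, verbatim from the trace, plus the corresponding wait using the helper sketched earlier:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1    # the re-attached namespace surfaces under a new name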
00:56:46.465 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:56:46.465 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:56:46.465 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:56:46.465 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:56:46.465 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2370719 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2370719 ']' 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 2370719 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2370719 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2370719' 00:56:46.466 killing process with pid 2370719 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2370719 00:56:46.466 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2370719 00:56:46.724 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:56:46.724 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:56:46.724 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:56:46.724 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:56:46.724 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:56:46.724 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:56:46.724 03:53:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:56:46.724 rmmod nvme_tcp 00:56:46.724 rmmod nvme_fabrics 00:56:46.724 rmmod nvme_keyring 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
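Teardown runs in dependency order: the host-side app goes first, then the kernel modules loaded for the run (the single modprobe -v -r nvme-tcp is what produces the three rmmod lines above, pulling nvme_fabrics and nvme_keyring out with it), then the target and its namespace. Roughly, with the pids from this run:

  kill 2370719                       # host-side SPDK app (reactor_0)
  modprobe -v -r nvme-tcp            # cascades: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics        # safety net; already removed by the cascade
  kill 2370693                       # namespaced nvmf_tgt (reactor_1)
  ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1           # the final flush seen in the trace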
00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2370693 ']' 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2370693 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@949 -- # '[' -z 2370693 ']' 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # kill -0 2370693 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # uname 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2370693 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2370693' 00:56:46.724 killing process with pid 2370693 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # kill 2370693 00:56:46.724 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # wait 2370693 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:46.983 03:53:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:48.921 03:53:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:56:48.921 00:56:48.921 real 0m21.606s 00:56:48.921 user 0m26.722s 00:56:48.921 sys 0m5.927s 00:56:49.180 03:53:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:56:49.180 03:53:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:56:49.180 ************************************ 00:56:49.180 END TEST nvmf_discovery_remove_ifc 00:56:49.180 ************************************ 00:56:49.181 03:53:30 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:56:49.181 03:53:30 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:56:49.181 03:53:30 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:56:49.181 03:53:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:56:49.181 ************************************ 00:56:49.181 START TEST nvmf_identify_kernel_target 00:56:49.181 ************************************ 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:56:49.181 * Looking for test storage... 00:56:49.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:56:49.181 03:53:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:56:54.443 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:56:54.444 03:53:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:56:54.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:56:54.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:56:54.444 
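Device discovery for this test starts from PCI IDs rather than interface names: 0x8086:0x159b is the Intel E810 family (ice driver), and each matching function is resolved to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found again below. The sysfs walk, reduced to its core (the two PCI addresses are the E810 ports found above):

  for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] || continue            # function exposes no net device
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done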
03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:56:54.444 Found net devices under 0000:86:00.0: cvl_0_0 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:56:54.444 Found net devices under 0000:86:00.1: cvl_0_1 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:56:54.444 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:56:54.703 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:56:54.703 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:56:54.703 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:56:54.703 03:53:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:56:54.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:56:54.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:56:54.703 00:56:54.703 --- 10.0.0.2 ping statistics --- 00:56:54.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:54.703 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:56:54.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:56:54.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:56:54.703 00:56:54.703 --- 10.0.0.1 ping statistics --- 00:56:54.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:56:54.703 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:56:54.703 
03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:56:54.703 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:56:54.961 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:56:54.961 03:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:56:57.494 Waiting for block devices as requested 00:56:57.494 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:56:57.753 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:56:57.753 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:56:57.753 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:56:57.753 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:56:57.753 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:56:58.013 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:56:58.013 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:56:58.013 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:56:58.013 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:56:58.272 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:56:58.272 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:56:58.272 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:56:58.531 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:56:58.531 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:56:58.531 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:56:58.531 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:56:58.790 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:56:58.790 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:56:58.790 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:56:58.791 03:53:39 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:56:58.791 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:56:58.791 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:56:58.791 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:56:58.791 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:56:58.791 03:53:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:56:58.791 No valid GPT data, bailing 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:56:58.791 00:56:58.791 Discovery Log Number of Records 2, Generation counter 2 00:56:58.791 =====Discovery Log Entry 0====== 00:56:58.791 trtype: tcp 00:56:58.791 adrfam: ipv4 00:56:58.791 subtype: current discovery subsystem 00:56:58.791 treq: not specified, sq flow control disable supported 00:56:58.791 portid: 1 00:56:58.791 trsvcid: 4420 00:56:58.791 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:56:58.791 traddr: 10.0.0.1 00:56:58.791 eflags: none 00:56:58.791 sectype: none 00:56:58.791 =====Discovery Log Entry 1====== 
00:56:58.791 trtype: tcp 00:56:58.791 adrfam: ipv4 00:56:58.791 subtype: nvme subsystem 00:56:58.791 treq: not specified, sq flow control disable supported 00:56:58.791 portid: 1 00:56:58.791 trsvcid: 4420 00:56:58.791 subnqn: nqn.2016-06.io.spdk:testnqn 00:56:58.791 traddr: 10.0.0.1 00:56:58.791 eflags: none 00:56:58.791 sectype: none 00:56:58.791 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:56:58.791 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:56:58.791 EAL: No free 2048 kB hugepages reported on node 1 00:56:59.051 ===================================================== 00:56:59.051 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:56:59.051 ===================================================== 00:56:59.051 Controller Capabilities/Features 00:56:59.051 ================================ 00:56:59.051 Vendor ID: 0000 00:56:59.051 Subsystem Vendor ID: 0000 00:56:59.051 Serial Number: affa0af6aa486ebc599b 00:56:59.051 Model Number: Linux 00:56:59.051 Firmware Version: 6.7.0-68 00:56:59.051 Recommended Arb Burst: 0 00:56:59.051 IEEE OUI Identifier: 00 00 00 00:56:59.051 Multi-path I/O 00:56:59.051 May have multiple subsystem ports: No 00:56:59.051 May have multiple controllers: No 00:56:59.051 Associated with SR-IOV VF: No 00:56:59.051 Max Data Transfer Size: Unlimited 00:56:59.051 Max Number of Namespaces: 0 00:56:59.051 Max Number of I/O Queues: 1024 00:56:59.051 NVMe Specification Version (VS): 1.3 00:56:59.051 NVMe Specification Version (Identify): 1.3 00:56:59.051 Maximum Queue Entries: 1024 00:56:59.051 Contiguous Queues Required: No 00:56:59.051 Arbitration Mechanisms Supported 00:56:59.051 Weighted Round Robin: Not Supported 00:56:59.051 Vendor Specific: Not Supported 00:56:59.051 Reset Timeout: 7500 ms 00:56:59.051 Doorbell Stride: 4 bytes 00:56:59.051 NVM Subsystem Reset: Not Supported 00:56:59.051 Command Sets Supported 00:56:59.051 NVM Command Set: Supported 00:56:59.051 Boot Partition: Not Supported 00:56:59.051 Memory Page Size Minimum: 4096 bytes 00:56:59.051 Memory Page Size Maximum: 4096 bytes 00:56:59.051 Persistent Memory Region: Not Supported 00:56:59.051 Optional Asynchronous Events Supported 00:56:59.051 Namespace Attribute Notices: Not Supported 00:56:59.051 Firmware Activation Notices: Not Supported 00:56:59.051 ANA Change Notices: Not Supported 00:56:59.051 PLE Aggregate Log Change Notices: Not Supported 00:56:59.051 LBA Status Info Alert Notices: Not Supported 00:56:59.051 EGE Aggregate Log Change Notices: Not Supported 00:56:59.051 Normal NVM Subsystem Shutdown event: Not Supported 00:56:59.051 Zone Descriptor Change Notices: Not Supported 00:56:59.051 Discovery Log Change Notices: Supported 00:56:59.051 Controller Attributes 00:56:59.051 128-bit Host Identifier: Not Supported 00:56:59.051 Non-Operational Permissive Mode: Not Supported 00:56:59.051 NVM Sets: Not Supported 00:56:59.051 Read Recovery Levels: Not Supported 00:56:59.051 Endurance Groups: Not Supported 00:56:59.051 Predictable Latency Mode: Not Supported 00:56:59.051 Traffic Based Keep ALive: Not Supported 00:56:59.051 Namespace Granularity: Not Supported 00:56:59.051 SQ Associations: Not Supported 00:56:59.051 UUID List: Not Supported 00:56:59.051 Multi-Domain Subsystem: Not Supported 00:56:59.051 Fixed Capacity Management: Not Supported 00:56:59.051 Variable Capacity Management: Not 
Supported 00:56:59.051 Delete Endurance Group: Not Supported 00:56:59.051 Delete NVM Set: Not Supported 00:56:59.051 Extended LBA Formats Supported: Not Supported 00:56:59.051 Flexible Data Placement Supported: Not Supported 00:56:59.051 00:56:59.051 Controller Memory Buffer Support 00:56:59.051 ================================ 00:56:59.051 Supported: No 00:56:59.051 00:56:59.051 Persistent Memory Region Support 00:56:59.051 ================================ 00:56:59.051 Supported: No 00:56:59.051 00:56:59.051 Admin Command Set Attributes 00:56:59.051 ============================ 00:56:59.051 Security Send/Receive: Not Supported 00:56:59.051 Format NVM: Not Supported 00:56:59.051 Firmware Activate/Download: Not Supported 00:56:59.051 Namespace Management: Not Supported 00:56:59.051 Device Self-Test: Not Supported 00:56:59.051 Directives: Not Supported 00:56:59.051 NVMe-MI: Not Supported 00:56:59.051 Virtualization Management: Not Supported 00:56:59.051 Doorbell Buffer Config: Not Supported 00:56:59.051 Get LBA Status Capability: Not Supported 00:56:59.051 Command & Feature Lockdown Capability: Not Supported 00:56:59.051 Abort Command Limit: 1 00:56:59.051 Async Event Request Limit: 1 00:56:59.051 Number of Firmware Slots: N/A 00:56:59.051 Firmware Slot 1 Read-Only: N/A 00:56:59.051 Firmware Activation Without Reset: N/A 00:56:59.051 Multiple Update Detection Support: N/A 00:56:59.051 Firmware Update Granularity: No Information Provided 00:56:59.051 Per-Namespace SMART Log: No 00:56:59.051 Asymmetric Namespace Access Log Page: Not Supported 00:56:59.051 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:56:59.051 Command Effects Log Page: Not Supported 00:56:59.051 Get Log Page Extended Data: Supported 00:56:59.051 Telemetry Log Pages: Not Supported 00:56:59.051 Persistent Event Log Pages: Not Supported 00:56:59.051 Supported Log Pages Log Page: May Support 00:56:59.051 Commands Supported & Effects Log Page: Not Supported 00:56:59.051 Feature Identifiers & Effects Log Page:May Support 00:56:59.051 NVMe-MI Commands & Effects Log Page: May Support 00:56:59.051 Data Area 4 for Telemetry Log: Not Supported 00:56:59.051 Error Log Page Entries Supported: 1 00:56:59.051 Keep Alive: Not Supported 00:56:59.051 00:56:59.052 NVM Command Set Attributes 00:56:59.052 ========================== 00:56:59.052 Submission Queue Entry Size 00:56:59.052 Max: 1 00:56:59.052 Min: 1 00:56:59.052 Completion Queue Entry Size 00:56:59.052 Max: 1 00:56:59.052 Min: 1 00:56:59.052 Number of Namespaces: 0 00:56:59.052 Compare Command: Not Supported 00:56:59.052 Write Uncorrectable Command: Not Supported 00:56:59.052 Dataset Management Command: Not Supported 00:56:59.052 Write Zeroes Command: Not Supported 00:56:59.052 Set Features Save Field: Not Supported 00:56:59.052 Reservations: Not Supported 00:56:59.052 Timestamp: Not Supported 00:56:59.052 Copy: Not Supported 00:56:59.052 Volatile Write Cache: Not Present 00:56:59.052 Atomic Write Unit (Normal): 1 00:56:59.052 Atomic Write Unit (PFail): 1 00:56:59.052 Atomic Compare & Write Unit: 1 00:56:59.052 Fused Compare & Write: Not Supported 00:56:59.052 Scatter-Gather List 00:56:59.052 SGL Command Set: Supported 00:56:59.052 SGL Keyed: Not Supported 00:56:59.052 SGL Bit Bucket Descriptor: Not Supported 00:56:59.052 SGL Metadata Pointer: Not Supported 00:56:59.052 Oversized SGL: Not Supported 00:56:59.052 SGL Metadata Address: Not Supported 00:56:59.052 SGL Offset: Supported 00:56:59.052 Transport SGL Data Block: Not Supported 00:56:59.052 Replay Protected Memory Block: 
Not Supported 00:56:59.052 00:56:59.052 Firmware Slot Information 00:56:59.052 ========================= 00:56:59.052 Active slot: 0 00:56:59.052 00:56:59.052 00:56:59.052 Error Log 00:56:59.052 ========= 00:56:59.052 00:56:59.052 Active Namespaces 00:56:59.052 ================= 00:56:59.052 Discovery Log Page 00:56:59.052 ================== 00:56:59.052 Generation Counter: 2 00:56:59.052 Number of Records: 2 00:56:59.052 Record Format: 0 00:56:59.052 00:56:59.052 Discovery Log Entry 0 00:56:59.052 ---------------------- 00:56:59.052 Transport Type: 3 (TCP) 00:56:59.052 Address Family: 1 (IPv4) 00:56:59.052 Subsystem Type: 3 (Current Discovery Subsystem) 00:56:59.052 Entry Flags: 00:56:59.052 Duplicate Returned Information: 0 00:56:59.052 Explicit Persistent Connection Support for Discovery: 0 00:56:59.052 Transport Requirements: 00:56:59.052 Secure Channel: Not Specified 00:56:59.052 Port ID: 1 (0x0001) 00:56:59.052 Controller ID: 65535 (0xffff) 00:56:59.052 Admin Max SQ Size: 32 00:56:59.052 Transport Service Identifier: 4420 00:56:59.052 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:56:59.052 Transport Address: 10.0.0.1 00:56:59.052 Discovery Log Entry 1 00:56:59.052 ---------------------- 00:56:59.052 Transport Type: 3 (TCP) 00:56:59.052 Address Family: 1 (IPv4) 00:56:59.052 Subsystem Type: 2 (NVM Subsystem) 00:56:59.052 Entry Flags: 00:56:59.052 Duplicate Returned Information: 0 00:56:59.052 Explicit Persistent Connection Support for Discovery: 0 00:56:59.052 Transport Requirements: 00:56:59.052 Secure Channel: Not Specified 00:56:59.052 Port ID: 1 (0x0001) 00:56:59.052 Controller ID: 65535 (0xffff) 00:56:59.052 Admin Max SQ Size: 32 00:56:59.052 Transport Service Identifier: 4420 00:56:59.052 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:56:59.052 Transport Address: 10.0.0.1 00:56:59.052 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:56:59.052 EAL: No free 2048 kB hugepages reported on node 1 00:56:59.052 get_feature(0x01) failed 00:56:59.052 get_feature(0x02) failed 00:56:59.052 get_feature(0x04) failed 00:56:59.052 ===================================================== 00:56:59.052 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:56:59.052 ===================================================== 00:56:59.052 Controller Capabilities/Features 00:56:59.052 ================================ 00:56:59.052 Vendor ID: 0000 00:56:59.052 Subsystem Vendor ID: 0000 00:56:59.052 Serial Number: 102cdcf779b25ba007f3 00:56:59.052 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:56:59.052 Firmware Version: 6.7.0-68 00:56:59.052 Recommended Arb Burst: 6 00:56:59.052 IEEE OUI Identifier: 00 00 00 00:56:59.052 Multi-path I/O 00:56:59.052 May have multiple subsystem ports: Yes 00:56:59.052 May have multiple controllers: Yes 00:56:59.052 Associated with SR-IOV VF: No 00:56:59.052 Max Data Transfer Size: Unlimited 00:56:59.052 Max Number of Namespaces: 1024 00:56:59.052 Max Number of I/O Queues: 128 00:56:59.052 NVMe Specification Version (VS): 1.3 00:56:59.052 NVMe Specification Version (Identify): 1.3 00:56:59.052 Maximum Queue Entries: 1024 00:56:59.052 Contiguous Queues Required: No 00:56:59.052 Arbitration Mechanisms Supported 00:56:59.052 Weighted Round Robin: Not Supported 00:56:59.052 Vendor Specific: Not Supported 
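The decoded discovery entries a few lines up (Transport Type: 3 (TCP); Subsystem Type: 3 for the discovery subsystem, 2 for the NVM subsystem) carry the same fields the raw `nvme discover` output printed earlier. For scripting against those records, a machine-readable form is handy; a hedged one-liner, assuming a reasonably recent nvme-cli with JSON output plus jq, and that the JSON schema exposes a `records` array as in current nvme-cli:

  nvme discover -t tcp -a 10.0.0.1 -s 4420 -o json \
    | jq '.records[] | {trtype, subtype, subnqn, traddr, trsvcid}'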
00:56:59.052 Reset Timeout: 7500 ms 00:56:59.052 Doorbell Stride: 4 bytes 00:56:59.052 NVM Subsystem Reset: Not Supported 00:56:59.052 Command Sets Supported 00:56:59.052 NVM Command Set: Supported 00:56:59.052 Boot Partition: Not Supported 00:56:59.052 Memory Page Size Minimum: 4096 bytes 00:56:59.052 Memory Page Size Maximum: 4096 bytes 00:56:59.052 Persistent Memory Region: Not Supported 00:56:59.052 Optional Asynchronous Events Supported 00:56:59.052 Namespace Attribute Notices: Supported 00:56:59.052 Firmware Activation Notices: Not Supported 00:56:59.052 ANA Change Notices: Supported 00:56:59.052 PLE Aggregate Log Change Notices: Not Supported 00:56:59.052 LBA Status Info Alert Notices: Not Supported 00:56:59.052 EGE Aggregate Log Change Notices: Not Supported 00:56:59.052 Normal NVM Subsystem Shutdown event: Not Supported 00:56:59.052 Zone Descriptor Change Notices: Not Supported 00:56:59.053 Discovery Log Change Notices: Not Supported 00:56:59.053 Controller Attributes 00:56:59.053 128-bit Host Identifier: Supported 00:56:59.053 Non-Operational Permissive Mode: Not Supported 00:56:59.053 NVM Sets: Not Supported 00:56:59.053 Read Recovery Levels: Not Supported 00:56:59.053 Endurance Groups: Not Supported 00:56:59.053 Predictable Latency Mode: Not Supported 00:56:59.053 Traffic Based Keep ALive: Supported 00:56:59.053 Namespace Granularity: Not Supported 00:56:59.053 SQ Associations: Not Supported 00:56:59.053 UUID List: Not Supported 00:56:59.053 Multi-Domain Subsystem: Not Supported 00:56:59.053 Fixed Capacity Management: Not Supported 00:56:59.053 Variable Capacity Management: Not Supported 00:56:59.053 Delete Endurance Group: Not Supported 00:56:59.053 Delete NVM Set: Not Supported 00:56:59.053 Extended LBA Formats Supported: Not Supported 00:56:59.053 Flexible Data Placement Supported: Not Supported 00:56:59.053 00:56:59.053 Controller Memory Buffer Support 00:56:59.053 ================================ 00:56:59.053 Supported: No 00:56:59.053 00:56:59.053 Persistent Memory Region Support 00:56:59.053 ================================ 00:56:59.053 Supported: No 00:56:59.053 00:56:59.053 Admin Command Set Attributes 00:56:59.053 ============================ 00:56:59.053 Security Send/Receive: Not Supported 00:56:59.053 Format NVM: Not Supported 00:56:59.053 Firmware Activate/Download: Not Supported 00:56:59.053 Namespace Management: Not Supported 00:56:59.053 Device Self-Test: Not Supported 00:56:59.053 Directives: Not Supported 00:56:59.053 NVMe-MI: Not Supported 00:56:59.053 Virtualization Management: Not Supported 00:56:59.053 Doorbell Buffer Config: Not Supported 00:56:59.053 Get LBA Status Capability: Not Supported 00:56:59.053 Command & Feature Lockdown Capability: Not Supported 00:56:59.053 Abort Command Limit: 4 00:56:59.053 Async Event Request Limit: 4 00:56:59.053 Number of Firmware Slots: N/A 00:56:59.053 Firmware Slot 1 Read-Only: N/A 00:56:59.053 Firmware Activation Without Reset: N/A 00:56:59.053 Multiple Update Detection Support: N/A 00:56:59.053 Firmware Update Granularity: No Information Provided 00:56:59.053 Per-Namespace SMART Log: Yes 00:56:59.053 Asymmetric Namespace Access Log Page: Supported 00:56:59.053 ANA Transition Time : 10 sec 00:56:59.053 00:56:59.053 Asymmetric Namespace Access Capabilities 00:56:59.053 ANA Optimized State : Supported 00:56:59.053 ANA Non-Optimized State : Supported 00:56:59.053 ANA Inaccessible State : Supported 00:56:59.053 ANA Persistent Loss State : Supported 00:56:59.053 ANA Change State : Supported 00:56:59.053 ANAGRPID is not 
changed : No 00:56:59.053 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:56:59.053 00:56:59.053 ANA Group Identifier Maximum : 128 00:56:59.053 Number of ANA Group Identifiers : 128 00:56:59.053 Max Number of Allowed Namespaces : 1024 00:56:59.053 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:56:59.053 Command Effects Log Page: Supported 00:56:59.053 Get Log Page Extended Data: Supported 00:56:59.053 Telemetry Log Pages: Not Supported 00:56:59.053 Persistent Event Log Pages: Not Supported 00:56:59.053 Supported Log Pages Log Page: May Support 00:56:59.053 Commands Supported & Effects Log Page: Not Supported 00:56:59.053 Feature Identifiers & Effects Log Page:May Support 00:56:59.053 NVMe-MI Commands & Effects Log Page: May Support 00:56:59.053 Data Area 4 for Telemetry Log: Not Supported 00:56:59.053 Error Log Page Entries Supported: 128 00:56:59.053 Keep Alive: Supported 00:56:59.053 Keep Alive Granularity: 1000 ms 00:56:59.053 00:56:59.053 NVM Command Set Attributes 00:56:59.053 ========================== 00:56:59.053 Submission Queue Entry Size 00:56:59.053 Max: 64 00:56:59.053 Min: 64 00:56:59.053 Completion Queue Entry Size 00:56:59.053 Max: 16 00:56:59.053 Min: 16 00:56:59.053 Number of Namespaces: 1024 00:56:59.053 Compare Command: Not Supported 00:56:59.053 Write Uncorrectable Command: Not Supported 00:56:59.053 Dataset Management Command: Supported 00:56:59.053 Write Zeroes Command: Supported 00:56:59.053 Set Features Save Field: Not Supported 00:56:59.053 Reservations: Not Supported 00:56:59.053 Timestamp: Not Supported 00:56:59.053 Copy: Not Supported 00:56:59.053 Volatile Write Cache: Present 00:56:59.053 Atomic Write Unit (Normal): 1 00:56:59.053 Atomic Write Unit (PFail): 1 00:56:59.053 Atomic Compare & Write Unit: 1 00:56:59.053 Fused Compare & Write: Not Supported 00:56:59.053 Scatter-Gather List 00:56:59.053 SGL Command Set: Supported 00:56:59.053 SGL Keyed: Not Supported 00:56:59.053 SGL Bit Bucket Descriptor: Not Supported 00:56:59.053 SGL Metadata Pointer: Not Supported 00:56:59.053 Oversized SGL: Not Supported 00:56:59.053 SGL Metadata Address: Not Supported 00:56:59.053 SGL Offset: Supported 00:56:59.053 Transport SGL Data Block: Not Supported 00:56:59.053 Replay Protected Memory Block: Not Supported 00:56:59.053 00:56:59.053 Firmware Slot Information 00:56:59.053 ========================= 00:56:59.053 Active slot: 0 00:56:59.053 00:56:59.053 Asymmetric Namespace Access 00:56:59.053 =========================== 00:56:59.053 Change Count : 0 00:56:59.053 Number of ANA Group Descriptors : 1 00:56:59.053 ANA Group Descriptor : 0 00:56:59.053 ANA Group ID : 1 00:56:59.053 Number of NSID Values : 1 00:56:59.053 Change Count : 0 00:56:59.053 ANA State : 1 00:56:59.053 Namespace Identifier : 1 00:56:59.053 00:56:59.053 Commands Supported and Effects 00:56:59.053 ============================== 00:56:59.053 Admin Commands 00:56:59.053 -------------- 00:56:59.053 Get Log Page (02h): Supported 00:56:59.053 Identify (06h): Supported 00:56:59.054 Abort (08h): Supported 00:56:59.054 Set Features (09h): Supported 00:56:59.054 Get Features (0Ah): Supported 00:56:59.054 Asynchronous Event Request (0Ch): Supported 00:56:59.054 Keep Alive (18h): Supported 00:56:59.054 I/O Commands 00:56:59.054 ------------ 00:56:59.054 Flush (00h): Supported 00:56:59.054 Write (01h): Supported LBA-Change 00:56:59.054 Read (02h): Supported 00:56:59.054 Write Zeroes (08h): Supported LBA-Change 00:56:59.054 Dataset Management (09h): Supported 00:56:59.054 00:56:59.054 Error Log 00:56:59.054 ========= 
00:56:59.054 Entry: 0 00:56:59.054 Error Count: 0x3 00:56:59.054 Submission Queue Id: 0x0 00:56:59.054 Command Id: 0x5 00:56:59.054 Phase Bit: 0 00:56:59.054 Status Code: 0x2 00:56:59.054 Status Code Type: 0x0 00:56:59.054 Do Not Retry: 1 00:56:59.054 Error Location: 0x28 00:56:59.054 LBA: 0x0 00:56:59.054 Namespace: 0x0 00:56:59.054 Vendor Log Page: 0x0 00:56:59.054 ----------- 00:56:59.054 Entry: 1 00:56:59.054 Error Count: 0x2 00:56:59.054 Submission Queue Id: 0x0 00:56:59.054 Command Id: 0x5 00:56:59.054 Phase Bit: 0 00:56:59.054 Status Code: 0x2 00:56:59.054 Status Code Type: 0x0 00:56:59.054 Do Not Retry: 1 00:56:59.054 Error Location: 0x28 00:56:59.054 LBA: 0x0 00:56:59.054 Namespace: 0x0 00:56:59.054 Vendor Log Page: 0x0 00:56:59.054 ----------- 00:56:59.054 Entry: 2 00:56:59.054 Error Count: 0x1 00:56:59.054 Submission Queue Id: 0x0 00:56:59.054 Command Id: 0x4 00:56:59.054 Phase Bit: 0 00:56:59.054 Status Code: 0x2 00:56:59.054 Status Code Type: 0x0 00:56:59.054 Do Not Retry: 1 00:56:59.054 Error Location: 0x28 00:56:59.054 LBA: 0x0 00:56:59.054 Namespace: 0x0 00:56:59.054 Vendor Log Page: 0x0 00:56:59.054 00:56:59.054 Number of Queues 00:56:59.054 ================ 00:56:59.054 Number of I/O Submission Queues: 128 00:56:59.054 Number of I/O Completion Queues: 128 00:56:59.054 00:56:59.054 ZNS Specific Controller Data 00:56:59.054 ============================ 00:56:59.054 Zone Append Size Limit: 0 00:56:59.054 00:56:59.054 00:56:59.054 Active Namespaces 00:56:59.054 ================= 00:56:59.054 get_feature(0x05) failed 00:56:59.054 Namespace ID:1 00:56:59.054 Command Set Identifier: NVM (00h) 00:56:59.054 Deallocate: Supported 00:56:59.054 Deallocated/Unwritten Error: Not Supported 00:56:59.054 Deallocated Read Value: Unknown 00:56:59.054 Deallocate in Write Zeroes: Not Supported 00:56:59.054 Deallocated Guard Field: 0xFFFF 00:56:59.054 Flush: Supported 00:56:59.054 Reservation: Not Supported 00:56:59.054 Namespace Sharing Capabilities: Multiple Controllers 00:56:59.054 Size (in LBAs): 3125627568 (1490GiB) 00:56:59.054 Capacity (in LBAs): 3125627568 (1490GiB) 00:56:59.054 Utilization (in LBAs): 3125627568 (1490GiB) 00:56:59.054 UUID: efc061d6-d1cc-4bd0-8d1f-ad35d14e3fea 00:56:59.054 Thin Provisioning: Not Supported 00:56:59.054 Per-NS Atomic Units: Yes 00:56:59.054 Atomic Boundary Size (Normal): 0 00:56:59.054 Atomic Boundary Size (PFail): 0 00:56:59.054 Atomic Boundary Offset: 0 00:56:59.054 NGUID/EUI64 Never Reused: No 00:56:59.054 ANA group ID: 1 00:56:59.054 Namespace Write Protected: No 00:56:59.054 Number of LBA Formats: 1 00:56:59.054 Current LBA Format: LBA Format #00 00:56:59.054 LBA Format #00: Data Size: 512 Metadata Size: 0 00:56:59.054 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:56:59.054 rmmod nvme_tcp 00:56:59.054 rmmod nvme_fabrics 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:56:59.054 03:53:40 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:57:01.590 03:53:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:57:04.125 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:57:04.125 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:57:04.125 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:57:05.502 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:57:05.762 00:57:05.762 real 0m16.529s 00:57:05.762 user 0m3.836s 00:57:05.762 sys 0m8.302s 00:57:05.762 03:53:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:57:05.762 03:53:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:57:05.762 ************************************ 00:57:05.762 END TEST nvmf_identify_kernel_target 00:57:05.762 ************************************ 00:57:05.762 03:53:46 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:57:05.762 03:53:46 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:57:05.762 03:53:46 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:57:05.762 03:53:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:57:05.762 ************************************ 00:57:05.762 START TEST nvmf_auth_host 00:57:05.762 ************************************ 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:57:05.762 * Looking for test storage... 00:57:05.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
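The kernel target the finished test exercised is plain nvmet configfs plumbing: mkdir the subsystem, namespace, and port, echo the attributes, and symlink the subsystem into the port (nvmf/common.sh@658-677), with clean_kernel_target undoing it in reverse at 03:53:42. The xtrace only shows the echoed values, not the redirection targets, so the attribute file names below are inferred from the kernel's nvmet configfs layout; a standalone sketch of the full lifecycle:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet-tcp                                # pulls in nvmet as a dependency
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"  > "$sub/attr_model"             # model string seen in identify
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                  # port starts listening
  # teardown, mirroring clean_kernel_target:
  echo 0 > "$sub/namespaces/1/enable"
  rm -f "$port/subsystems/$nqn"
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet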
00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:57:05.762 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:57:05.763 03:53:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:57:12.329 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:57:12.330 Found 0000:86:00.0 (0x8086 - 0x159b) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:57:12.330 Found 0000:86:00.1 (0x8086 - 0x159b) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:57:12.330 Found net devices under 0000:86:00.0: cvl_0_0 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:57:12.330 Found net devices under 0000:86:00.1: cvl_0_1 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:57:12.330 03:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:57:12.330 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:57:12.330 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:57:12.330 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:57:12.330 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:57:12.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:57:12.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:57:12.331 00:57:12.331 --- 10.0.0.2 ping statistics --- 00:57:12.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:12.331 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:57:12.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:57:12.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:57:12.331 00:57:12.331 --- 10.0.0.1 ping statistics --- 00:57:12.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:57:12.331 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@723 -- # xtrace_disable 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2383457 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 2383457 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2383457 ']' 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@729 -- # xtrace_disable 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=197028afc8f2468d467437b058491d2e 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.li0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 197028afc8f2468d467437b058491d2e 0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 197028afc8f2468d467437b058491d2e 0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=197028afc8f2468d467437b058491d2e 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.li0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.li0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.li0 
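gen_dhchap_key above strings three things together: pull N random bytes as hex with xxd, wrap them in the DHHC-1 text representation via the inline python, and chmod 0600 the temp file. The python body itself is elided from this trace; per the NVMe-oF DH-HMAC-CHAP spec the serialized secret is "DHHC-1:<hash-id>:<base64>:", where the base64 payload is the raw secret followed by its little-endian CRC-32. A sketch of that encoding (an assumption about the helper's internals, not a copy of it) for the null-digest case above:

  key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars = 16-byte secret
  python3 -c 'import base64,sys,zlib; s=bytes.fromhex(sys.argv[1]); print("DHHC-1:00:%s:" % base64.b64encode(s+zlib.crc32(s).to_bytes(4,"little")).decode())' "$key"

Here "00" is the hash id for an unhashed secret, matching digest=0 in the trace; sha256/sha384/sha512 map to 01/02/03 as in the digests array at nvmf/common.sh@724.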
00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cea9eb850531258985bbd9cfdfb00edd690bf9708e862e676d14ab29e0b36dfa 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qnw 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cea9eb850531258985bbd9cfdfb00edd690bf9708e862e676d14ab29e0b36dfa 3 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cea9eb850531258985bbd9cfdfb00edd690bf9708e862e676d14ab29e0b36dfa 3 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cea9eb850531258985bbd9cfdfb00edd690bf9708e862e676d14ab29e0b36dfa 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qnw 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qnw 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.qnw 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f2fc02d757674d33a4b319c60ef5af59a9f58c418918ebb1 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jbK 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f2fc02d757674d33a4b319c60ef5af59a9f58c418918ebb1 0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f2fc02d757674d33a4b319c60ef5af59a9f58c418918ebb1 0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f2fc02d757674d33a4b319c60ef5af59a9f58c418918ebb1 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jbK 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jbK 00:57:12.331 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jbK 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3f9902b38b5cd141c12302832c797fda4c61faaa2ee0330d 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.rSi 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3f9902b38b5cd141c12302832c797fda4c61faaa2ee0330d 2 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3f9902b38b5cd141c12302832c797fda4c61faaa2ee0330d 2 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3f9902b38b5cd141c12302832c797fda4c61faaa2ee0330d 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.rSi 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.rSi 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.rSi 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=a095aa2eaa87390b57feb8422efd67fa 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GGY 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a095aa2eaa87390b57feb8422efd67fa 1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a095aa2eaa87390b57feb8422efd67fa 1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a095aa2eaa87390b57feb8422efd67fa 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GGY 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GGY 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GGY 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=38f91f0cea7ab853a935aa0a2e55800f 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t53 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 38f91f0cea7ab853a935aa0a2e55800f 1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 38f91f0cea7ab853a935aa0a2e55800f 1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=38f91f0cea7ab853a935aa0a2e55800f 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:57:12.332 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t53 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t53 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.t53 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.590 03:53:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=17ad91ec07016a0c84e838c6c5e879158d6886e70bc73c0a 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5Lb 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 17ad91ec07016a0c84e838c6c5e879158d6886e70bc73c0a 2 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 17ad91ec07016a0c84e838c6c5e879158d6886e70bc73c0a 2 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=17ad91ec07016a0c84e838c6c5e879158d6886e70bc73c0a 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5Lb 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5Lb 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5Lb 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af160cf707939d180e5c8198a6a0748b 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V67 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af160cf707939d180e5c8198a6a0748b 0 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af160cf707939d180e5c8198a6a0748b 0 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af160cf707939d180e5c8198a6a0748b 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:57:12.590 03:53:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V67 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V67 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.V67 00:57:12.590 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ab538a029f2161bb498675e214496bc9cf4d645bf49982bc162b153ea0db6c0 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2hG 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ab538a029f2161bb498675e214496bc9cf4d645bf49982bc162b153ea0db6c0 3 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ab538a029f2161bb498675e214496bc9cf4d645bf49982bc162b153ea0db6c0 3 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ab538a029f2161bb498675e214496bc9cf4d645bf49982bc162b153ea0db6c0 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2hG 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2hG 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2hG 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2383457 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # '[' -z 2383457 ']' 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # local max_retries=100 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:57:12.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
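
The repeated gen_dhchap_key/format_key trace above reduces to a short recipe: pull len/2 random bytes from /dev/urandom as a hex string, wrap that string in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<digest id>:<base64(secret || CRC-32 LE)>:, and park it in a mode-0600 temp file. A minimal stand-alone sketch of those steps (not SPDK's exact source; the CRC/base64 detail is what the unseen `python -` heredoc computes, and it is consistent with the DHHC-1 strings that appear later in this log):

gen_dhchap_key() {
  local digest=$1 len=$2                           # e.g. "sha512" 64 -> 32 random bytes as 64 hex chars
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local key file
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # the ASCII hex string itself is the secret
  file=$(mktemp -t "spdk.key-$digest.XXX")
  python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # CRC-32 of the secret, little-endian
print(f"DHHC-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
PY
  chmod 0600 "$file"                               # DH-HMAC-CHAP key files must stay private
  echo "$file"
}
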
00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@839 -- # xtrace_disable 00:57:12.591 03:53:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@863 -- # return 0 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.li0 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.qnw ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qnw 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jbK 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.rSi ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rSi 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GGY 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.t53 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.t53 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
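
With all five key slots (and their controller counterparts) generated, waitforlisten blocks until the target's RPC socket is up, and the rpc_cmd calls that follow register each file with SPDK's keyring so the DH-HMAC-CHAP code can reference secrets by name. The same registrations issued by hand (a sketch; rpc_cmd is a thin wrapper around scripts/rpc.py against /var/tmp/spdk.sock):

scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.li0
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qnw
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.jbK
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.rSi
# ...and likewise key2/ckey2, key3/ckey3, and key4 (ckeys[4] is empty, so there is no ckey4)
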
00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5Lb 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.V67 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.V67 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2hG 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
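
nvmet_auth_init then points configure_kernel_target at 10.0.0.1: the mkdir/echo/ln -s trace that follows assembles a kernel NVMe-oF/TCP target through configfs. Spelled out with explicit attribute paths (a sketch: xtrace does not show redirection targets, so the attribute names are inferred from the values being written):

modprobe nvmet                                       # the TCP transport module is assumed loaded as well
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"    # relaxed here, tightened once allowed_hosts is populated
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"         # publish the subsystem on port 1
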
00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:57:12.849 03:53:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:57:16.135 Waiting for block devices as requested 00:57:16.135 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:57:16.135 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:57:16.135 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:57:16.135 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:57:16.135 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:57:16.135 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:57:16.135 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:57:16.438 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:57:16.438 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:57:16.438 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:57:16.438 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:57:16.696 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:57:16.696 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:57:16.696 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:57:16.696 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:57:16.954 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:57:16.954 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:57:17.520 No valid GPT data, bailing 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:57:17.520 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:57:17.779 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:57:17.779 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:57:17.779 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:57:17.779 03:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:57:17.779 00:57:17.779 Discovery Log Number of Records 2, Generation counter 2 00:57:17.779 =====Discovery Log Entry 0====== 00:57:17.779 trtype: tcp 00:57:17.779 adrfam: ipv4 00:57:17.779 subtype: current discovery subsystem 00:57:17.779 treq: not specified, sq flow control disable supported 00:57:17.779 portid: 1 00:57:17.779 trsvcid: 4420 00:57:17.779 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:57:17.779 traddr: 10.0.0.1 00:57:17.779 eflags: none 00:57:17.779 sectype: none 00:57:17.779 =====Discovery Log Entry 1====== 00:57:17.779 trtype: tcp 00:57:17.779 adrfam: ipv4 00:57:17.779 subtype: nvme subsystem 00:57:17.779 treq: not specified, sq flow control disable supported 00:57:17.779 portid: 1 00:57:17.779 trsvcid: 4420 00:57:17.779 subnqn: nqn.2024-02.io.spdk:cnode0 00:57:17.779 traddr: 10.0.0.1 00:57:17.779 eflags: none 00:57:17.779 sectype: none 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 
]] 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:17.779 nvme0n1 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:17.779 03:53:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:17.779 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.038 
03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.038 nvme0n1 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.038 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:18.297 03:53:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.297 nvme0n1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
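
Each connect_authenticate iteration visible above and below is a pair of host-side RPCs: constrain the bdev_nvme layer to the digest and DH group under test, then attach using the keyring names for that keyid (the nvme0n1 markers between iterations are the namespace surfacing while the controller is attached). The traced calls issued directly (sketch):

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" on success
scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next keyid
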
00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:18.297 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.556 nvme0n1 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:18.556 03:53:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:18.556 03:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:18.557 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.557 03:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.816 nvme0n1 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:18.816 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.074 nvme0n1 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.074 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.332 nvme0n1 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.332 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.591 nvme0n1 00:57:19.591 
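For orientation: the entries above and below trace host/auth.sh sweeping every dhgroup/keyid combination for the sha256 digest. Lines @101-@103 pick the parameters and stage the key on the target (@42-@51), then @104 hands off to connect_authenticate (sketched after a later iteration below). The following is a minimal reconstruction from the xtrace only, not the verbatim script: the redirection targets of the echo commands (the nvmet configfs attributes they configure) are not visible in the trace, and the DHHC-1 secrets in keys[]/ckeys[] are abbreviated here.

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # the groups exercised in this log
    keys=("DHHC-1:00:MTk3...:" "DHHC-1:00:ZjJm...:" "DHHC-1:01:YTA5...:" "DHHC-1:02:MTdh...:" "DHHC-1:03:NmFi...:")
    ckeys=("DHHC-1:03:Y2Vh...:" "DHHC-1:02:M2Y5...:" "DHHC-1:01:Mzhm...:" "DHHC-1:00:YWYx...:" "")   # keyid 4 has no ctrlr key

    nvmet_auth_set_key() {                   # host/auth.sh@42-@51, per the trace
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac($digest)"                 # @48
        echo "$dhgroup"                      # @49
        echo "$key"                          # @50
        [[ -z $ckey ]] || echo "$ckey"       # @51: ctrlr key only when one is defined
    }

    for dhgroup in "${dhgroups[@]}"; do      # @101
        for keyid in "${!keys[@]}"; do       # @102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104
        done
    done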
03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.591 03:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.850 nvme0n1 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:19.850 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:19.851 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.109 nvme0n1 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.109 
03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:20.109 03:54:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.109 nvme0n1 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.109 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:57:20.368 03:54:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.368 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.628 nvme0n1 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
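The host-side half of each iteration is what @55-@65 trace: restrict the initiator to the digest/dhgroup under test via RPC, resolve the target IP for the active transport, attach with the matching --dhchap-key (plus --dhchap-ctrlr-key when one exists), verify the controller came up, and detach again. A sketch under the same caveats: only the success path appears in this log, so error handling is omitted; rpc_cmd is the suite's wrapper for SPDK's scripts/rpc.py; the key0..key4/ckey0..ckey3 names refer to RPC keys registered earlier in the test, outside this excerpt; and the transport variable is shown as TEST_TRANSPORT, its usual name in the suite, though the xtrace only ever shows its expanded value (tcp).

    get_main_ns_ip() {                       # nvmf/common.sh@741-@755, per the trace
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        # Map the transport to an environment-variable *name*, then expand it
        # indirectly; here tcp -> NVMF_INITIATOR_IP -> 10.0.0.1.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        echo "${!ip}"
    }

    connect_authenticate() {                 # host/auth.sh@55-@65, per the trace
        local digest=$1 dhgroup=$2 keyid=$3
        # @58, copied as traced: expands to nothing when ckeys[keyid] is empty
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
                --dhchap-dhgroups "$dhgroup"                                   # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"                        # @61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] # @64
        rpc_cmd bdev_nvme_detach_controller nvme0                              # @65
    }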
"${!keys[@]}" 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:20.628 03:54:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.628 03:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.887 nvme0n1 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:20.887 03:54:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:20.887 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.146 nvme0n1 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.146 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.405 nvme0n1 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.405 03:54:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.405 03:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.664 nvme0n1 00:57:21.664 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.664 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:21.664 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:21.664 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.664 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.664 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:21.924 03:54:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:21.924 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.183 nvme0n1 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:22.183 
03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:22.183 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:22.184 03:54:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.184 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.752 nvme0n1 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.752 03:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:22.752 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:23.010 nvme0n1 00:57:23.010 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.010 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:23.010 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:23.010 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.010 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:23.010 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:23.269 
03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:23.269 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.270 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:23.529 nvme0n1 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:23.529 03:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.097 nvme0n1 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:24.097 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.098 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.663 nvme0n1 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:24.663 03:54:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:24.663 03:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:24.663 03:54:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:57:24.663 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:24.663 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:24.664 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:25.230 nvme0n1 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:25.230 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:25.489 03:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.057 nvme0n1 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:26.057 
03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
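Each iteration in this trace first arms the target side: nvmet_auth_set_key (host/auth.sh@42-51) loads the digest, DH group, key, and optional controller key for the current keyid into the kernel nvmet target before the host attempts to authenticate. Below is a minimal reconstruction from the echoed values visible above; the configfs destinations are an assumption, since the trace shows only the echo stages, not their redirection targets:

    # Hedged sketch of nvmet_auth_set_key (configfs paths assumed, not shown in the trace).
    # $hostnqn, $key, and $ckey stand in for the values echoed in the records above.
    echo 'hmac(sha256)' > /sys/kernel/config/nvmet/hosts/"$hostnqn"/dhchap_hash
    echo ffdhe8192      > /sys/kernel/config/nvmet/hosts/"$hostnqn"/dhchap_dhgroup
    echo "$key"         > /sys/kernel/config/nvmet/hosts/"$hostnqn"/dhchap_key
    # The controller key is optional; keyid 4 defines none, so it is set conditionally.
    [[ -n $ckey ]] && echo "$ckey" > /sys/kernel/config/nvmet/hosts/"$hostnqn"/dhchap_ctrl_key
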
00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:26.057 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.058 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.625 nvme0n1 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:26.625 
03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:26.625 03:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:26.626 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:26.626 03:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.194 nvme0n1 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.194 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.453 nvme0n1 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
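On the host side, connect_authenticate (host/auth.sh@55-65) then drives the same short RPC sequence every time, all of it visible in the trace: restrict the allowed DH-HMAC-CHAP digest and DH group, attach the controller with the key for this keyid (adding --dhchap-ctrlr-key only when a controller key is defined; keyid 4 has none, hence the ${ckeys[keyid]:+...} expansion at host/auth.sh@58), check that the controller appears, and detach again. A minimal sketch of the sha384/ffdhe2048/keyid=1 iteration in progress here, using only RPCs and flags that appear in the trace; the key names key1/ckey1 are assumed to have been registered with the host keyring earlier in the test, outside this excerpt:

    # Hedged sketch of one connect_authenticate iteration, reconstructed from the trace.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Authentication succeeded iff the controller actually came up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The surrounding loop (host/auth.sh@100-102) walks every digest against every DH group and every keyid, which is why the records above move from sha256/ffdhe8192 back to sha384/ffdhe2048: the sha384 pass has just begun. The loop skeleton, as implied by the traced for-statements:

    for digest in "${digests[@]}"; do        # sha256, then sha384 in this excerpt
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192 (6144 and 8192 visible above)
        for keyid in "${!keys[@]}"; do       # 0..4; keys[4] has no controller key
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
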
00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.453 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.712 nvme0n1 00:57:27.712 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.712 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:27.712 03:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:27.712 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.712 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.712 03:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:27.712 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:27.713 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.713 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.972 nvme0n1 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:27.972 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:27.973 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:27.973 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:27.973 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:27.973 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:27.973 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:27.973 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.232 nvme0n1 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.232 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.233 nvme0n1 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.233 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
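Every iteration in this stretch of the trace has the same shape: for each (digest, dhgroup, keyid) combination, nvmet_auth_set_key programs the key pair into the kernel nvmet target, then connect_authenticate restricts the SPDK initiator to that digest/dhgroup via bdev_nvme_set_options, attaches nvme0 with --dhchap-key keyN (adding --dhchap-ctrlr-key ckeyN only when a controller key is defined; keyid 4 runs with an empty ckey), checks that the controller actually appeared, and detaches it again. A minimal sketch of that loop, assuming the keys/ckeys arrays and the rpc_cmd wrapper from the harness; the commands and arguments themselves are the ones visible in the trace:

# Condensed sketch of the traced loop. Assumptions: rpc_cmd wraps SPDK's
# scripts/rpc.py as in the autotest harness; keys/ckeys and
# nvmet_auth_set_key come from host/auth.sh and are only outlined here.
# The full test also iterates digests; sha384 is the one in flight here.
digest=sha384
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
  for keyid in "${!keys[@]}"; do
    # Program the target side with the key pair for this iteration.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Restrict the initiator to exactly this digest/dhgroup combination.
    rpc_cmd bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the host key; pass the controller key only if one exists.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

    # Authentication succeeded iff the controller shows up; then clean up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done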
00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.492 nvme0n1 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.492 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
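get_main_ns_ip, traced repeatedly at nvmf/common.sh@741-755, is what resolves the 10.0.0.1 used in every attach: it maps the transport to the name of an environment variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and then expands that name indirectly. A sketch reconstructed from the trace; the function shell and the transport variable's name are assumptions, since the trace only shows expanded values:

get_main_ns_ip() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP

  # TEST_TRANSPORT is an assumed name; the trace only shows its value, tcp.
  [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
  [[ -z ${!ip} ]] && return 1            # indirect expansion: 10.0.0.1 here
  echo "${!ip}"
}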
00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.751 03:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.751 nvme0n1 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:28.751 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:29.011 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.012 nvme0n1 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:29.012 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.271 nvme0n1 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.271 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.272 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.531 nvme0n1 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.531 03:54:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.531 03:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.790 nvme0n1 00:57:29.790 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:29.790 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:29.790 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:29.790 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:29.790 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:29.790 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.049 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.308 nvme0n1 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.308 03:54:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:30.308 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.309 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.567 nvme0n1 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:30.567 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:57:30.568 03:54:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.568 03:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.826 nvme0n1 00:57:30.826 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.826 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:30.826 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:30.826 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:57:30.827 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.085 nvme0n1 00:57:31.085 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.085 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:31.085 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.085 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:31.085 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.085 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:31.343 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.344 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.602 nvme0n1 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:31.602 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:31.603 03:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.170 nvme0n1 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.171 03:54:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.171 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.484 nvme0n1 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.484 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:32.742 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:32.743 03:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.001 nvme0n1 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:33.001 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
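The cycle traced above repeats for every {digest, dhgroup, keyid} combination. On the target side, nvmet_auth_set_key (host/auth.sh@42-51) echoes the hash name, the DH group, and the DHHC-1 key pair in that order; the redirect targets are not visible in this xtrace, but on the standard Linux nvmet configfs layout they would be the per-host dhchap attributes. A minimal sketch under that assumption (the configfs paths are inferred, not shown in the trace; keys/ckeys are the key arrays the suite iterates over):

    # Sketch of the target-side helper, assuming the kernel nvmet configfs
    # layout (/sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*). The echo
    # order matches host/auth.sh@48-51 in the xtrace above.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha384)
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe6144
        echo "${keys[keyid]}"  > "${host}/dhchap_key"       # DHHC-1:0X:...
        # A controller key is only set when one exists for this keyid
        # (keyid 4 has none, so authentication there is unidirectional).
        [[ -n ${ckeys[keyid]} ]] &&
            echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
    }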
00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.002 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.569 nvme0n1 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
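On the host side, connect_authenticate (host/auth.sh@55-65) drives the SPDK bdev_nvme RPCs seen throughout this trace: restrict the allowed digests and DH groups, attach with the matching key pair, confirm the controller came up under its bdev name, then detach before the next combination. A condensed replay of one sha384/ffdhe6144 iteration, assuming a running target and the suite's rpc_cmd wrapper (all commands, arguments, and the 10.0.0.1:4420 endpoint are taken verbatim from the trace):

    # Allow only the digest/dhgroup pair under test on the host.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # DH-HMAC-CHAP attach with keyid 0 and its controller key.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Authentication succeeded iff the controller is visible by name
    # (the [[ nvme0 == \n\v\m\e\0 ]] check in the trace).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear down so the next {digest, dhgroup, keyid} combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0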
00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:33.569 03:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.137 nvme0n1 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:34.137 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.138 03:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.706 nvme0n1 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:34.706 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:35.274 nvme0n1 00:57:35.274 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:35.274 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:35.274 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:35.274 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:35.274 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:35.274 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:35.533 03:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.102 nvme0n1 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:36.102 03:54:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:36.102 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:36.103 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:36.103 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.103 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.671 nvme0n1 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:36.671 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.672 03:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.931 nvme0n1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.931 03:54:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:36.931 nvme0n1 00:57:36.931 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.191 nvme0n1 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.191 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.451 03:54:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:37.451 03:54:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.451 nvme0n1 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.451 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.711 nvme0n1 00:57:37.711 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.711 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:37.711 03:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:37.711 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.711 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.711 03:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.711 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.970 nvme0n1 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.970 
03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:37.970 03:54:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:37.970 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.230 nvme0n1 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
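Note: the nvmet_auth_set_key calls traced at host/auth.sh@42-51 provision the kernel nvmet target with the digest, DH group, and DHHC-1 secrets for one key index before each connect attempt; the four echo entries map onto writes to the target's per-host configfs attributes. A minimal sketch of such a helper follows, assuming a configfs-based nvmet target; the $nvmet_host path and the keys/ckeys array names are assumptions, not taken from this trace.

    # Sketch only: target-side key provisioning for a kernel nvmet target.
    # The dhchap_* names follow the Linux nvmet per-host configfs entries;
    # the exact host path is illustrative.
    nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}

        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # e.g. hmac(sha512)
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"      # e.g. ffdhe3072
        echo "$key" > "$nvmet_host/dhchap_key"              # DHHC-1:xx:...:
        # The controller (bidirectional) key is optional; key index 4 has none.
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }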
00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.230 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.489 nvme0n1 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.489 03:54:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:38.489 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
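Note: the get_main_ns_ip helper being traced here (nvmf/common.sh@741-755) picks the address to dial by transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, resolving to 10.0.0.1 throughout this run. Reconstructed from the trace it is roughly the following; the $TEST_TRANSPORT name is an assumption, since the trace only shows its expanded value, "tcp".

    # Rough reconstruction of get_main_ns_ip from the xtrace lines above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                # name of the env var
        [[ -z ${!ip} ]] && return 1                         # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                       # -> 10.0.0.1 here
    }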
00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.490 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.749 nvme0n1 00:57:38.749 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.749 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:38.749 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:38.750 
03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.750 03:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:38.750 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.009 nvme0n1 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:39.009 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.010 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.269 nvme0n1 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:39.269 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:39.270 03:54:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.270 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.529 nvme0n1 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
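Note: every iteration runs the same host-side sequence over SPDK's JSON-RPC interface: restrict the allowed digest and DH group, attach with the per-index DH-HMAC-CHAP keys, confirm the controller came up, then detach. Run by hand, the ffdhe4096/key1 pass that completed just above would look roughly like this; key1/ckey1 must already be registered in SPDK's keyring by the earlier part of the script, outside this excerpt.

    # Manual equivalent of one connect_authenticate pass via scripts/rpc.py,
    # with addresses and NQNs taken verbatim from the trace.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0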
00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.529 03:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.788 nvme0n1 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:39.788 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.047 nvme0n1 00:57:40.047 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.047 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:40.047 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:40.047 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.047 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.047 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.305 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:40.305 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:40.305 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.305 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.305 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.305 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.306 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.564 nvme0n1 00:57:40.564 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.564 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
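
At this point the script is iterating connect_authenticate over sha512 with the ffdhe6144 DH group; on the target side, nvmet_auth_set_key writes the matching digest, DH group and secrets into the kernel nvmet configfs for host0 (the echo 'hmac(sha512)' / echo ffdhe6144 / echo DHHC-... lines above). Each host-side iteration boils down to the RPC sequence below. This is a minimal sketch, not the script itself: it assumes SPDK's scripts/rpc.py is reachable (rpc_cmd in the trace is autotest's wrapper around it), and takes the address, NQNs, and key names directly from the trace; key0/ckey0 were loaded earlier in the run.

  # Restrict the host to one digest/DH-group combination for this pass.
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # Connect with DH-HMAC-CHAP, using host key 0 and controller (bidirectional) key 0.
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Authentication succeeded if the controller shows up, then clean up for the next keyid.
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc.py bdev_nvme_detach_controller nvme0
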
00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.565 03:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.823 nvme0n1 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:40.823 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
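
The DHHC-1 strings echoed in the trace are NVMe-oF DH-HMAC-CHAP secrets in their standard ASCII representation, DHHC-1:<hh>:<base64>:, where <hh> encodes the hash the secret was generated for (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret with a CRC-32 appended. That is easy to verify against this run; a quick sketch using the keyid=0 secret from above:

  key='DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz:'
  payload=${key#DHHC-1:*:}      # strip the "DHHC-1:00:" prefix
  payload=${payload%:}          # strip the trailing colon
  echo -n "$payload" | base64 -d | wc -c   # 36 bytes = 32-byte secret + 4-byte CRC32

The :01:, :02: and :03: secrets in this trace decode to 32-, 48- and 64-byte secrets plus CRC respectively, matching the SHA-256/384/512 output sizes.
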
00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.082 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.340 nvme0n1 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.340 03:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.908 nvme0n1 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:41.908 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:41.909 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.168 nvme0n1 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.168 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.169 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:42.169 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:42.169 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:42.169 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:42.169 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:42.169 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.428 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.687 nvme0n1 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.687 03:54:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTk3MDI4YWZjOGYyNDY4ZDQ2NzQzN2IwNTg0OTFkMmUasefz: 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: ]] 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Y2VhOWViODUwNTMxMjU4OTg1YmJkOWNmZGZiMDBlZGQ2OTBiZjk3MDhlODYyZTY3NmQxNGFiMjllMGIzNmRmYTUBBnY=: 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.687 03:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:42.687 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:43.255 nvme0n1 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:43.255 03:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.187 nvme0n1 00:57:44.187 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.187 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.188 03:54:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTA5NWFhMmVhYTg3MzkwYjU3ZmViODQyMmVmZDY3ZmFGOlCv: 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: ]] 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzhmOTFmMGNlYTdhYjg1M2E5MzVhYTBhMmU1NTgwMGbmw3GJ: 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.188 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.754 nvme0n1 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTdhZDkxZWMwNzAxNmEwYzg0ZTgzOGM2YzVlODc5MTU4ZDY4ODZlNzBiYzczYzBhgosAAg==: 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWYxNjBjZjcwNzkzOWQxODBlNWM4MTk4YTZhMDc0OGKXaCiB: 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:57:44.754 03:54:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:44.754 03:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.321 nvme0n1 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmFiNTM4YTAyOWYyMTYxYmI0OTg2NzVlMjE0NDk2YmM5Y2Y0ZDY0NWJmNDk5ODJiYzE2MmIxNTNlYTBkYjZjMNCWDCE=: 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:57:45.321 03:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.888 nvme0n1 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJmYzAyZDc1NzY3NGQzM2E0YjMxOWM2MGVmNWFmNTlhOWY1OGM0MTg5MThlYmIxxpJQkg==: 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y5OTAyYjM4YjVjZDE0MWMxMjMwMjgzMmM3OTdmZGE0YzYxZmFhYTJlZTAzMzBkw7Kdqg==: 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:45.888 
03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.888 request: 00:57:45.888 { 00:57:45.888 "name": "nvme0", 00:57:45.888 "trtype": "tcp", 00:57:45.888 "traddr": "10.0.0.1", 00:57:45.888 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:57:45.888 "adrfam": "ipv4", 00:57:45.888 "trsvcid": "4420", 00:57:45.888 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:57:45.888 "method": "bdev_nvme_attach_controller", 00:57:45.888 "req_id": 1 00:57:45.888 } 00:57:45.888 Got JSON-RPC error response 00:57:45.888 response: 00:57:45.888 { 00:57:45.888 "code": -5, 00:57:45.888 "message": "Input/output error" 00:57:45.888 } 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:45.888 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:57:46.147 
03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:46.147 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:46.148 request: 00:57:46.148 { 00:57:46.148 "name": "nvme0", 00:57:46.148 "trtype": "tcp", 00:57:46.148 "traddr": "10.0.0.1", 00:57:46.148 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:57:46.148 "adrfam": "ipv4", 00:57:46.148 "trsvcid": "4420", 00:57:46.148 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:57:46.148 "dhchap_key": "key2", 00:57:46.148 "method": "bdev_nvme_attach_controller", 00:57:46.148 "req_id": 1 00:57:46.148 } 00:57:46.148 Got JSON-RPC error response 00:57:46.148 response: 00:57:46.148 { 00:57:46.148 "code": -5, 00:57:46.148 "message": "Input/output error" 00:57:46.148 } 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:57:46.148 
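
The two request/response pairs above are expected failures: host/auth.sh deliberately attaches without the correct DH-HMAC-CHAP material (first with no key at all, then with key2 against a target provisioned for key1), and the NOT wrapper asserts that the RPC fails, with the target surfacing code -5 / Input/output error. A minimal sketch of the same negative check, with parameters taken from the trace and assuming scripts/rpc.py:

  # Attaching with no DH-HMAC-CHAP key must be rejected by the secured subsystem;
  # rpc.py exits nonzero on the JSON-RPC error, so success here is a test failure.
  if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo 'unexpected: unauthenticated attach succeeded' >&2
      exit 1
  fi
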
03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:46.148 request: 00:57:46.148 { 00:57:46.148 "name": "nvme0", 00:57:46.148 "trtype": "tcp", 00:57:46.148 "traddr": "10.0.0.1", 00:57:46.148 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:57:46.148 "adrfam": "ipv4", 00:57:46.148 "trsvcid": "4420", 00:57:46.148 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:57:46.148 "dhchap_key": "key1", 00:57:46.148 "dhchap_ctrlr_key": "ckey2", 00:57:46.148 "method": "bdev_nvme_attach_controller", 00:57:46.148 "req_id": 1 
00:57:46.148 } 00:57:46.148 Got JSON-RPC error response 00:57:46.148 response: 00:57:46.148 { 00:57:46.148 "code": -5, 00:57:46.148 "message": "Input/output error" 00:57:46.148 } 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:57:46.148 rmmod nvme_tcp 00:57:46.148 rmmod nvme_fabrics 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2383457 ']' 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2383457 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@949 -- # '[' -z 2383457 ']' 00:57:46.148 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # kill -0 2383457 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # uname 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2383457 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2383457' 00:57:46.407 killing process with pid 2383457 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@968 -- # kill 2383457 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@973 -- # wait 2383457 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:57:46.407 03:54:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:57:46.407 03:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:57:48.942 03:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:57:51.517 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:57:51.517 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:57:51.776 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:57:51.776 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:57:51.776 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:57:51.776 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:57:51.776 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:57:53.154 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:57:53.154 03:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.li0 /tmp/spdk.key-null.jbK /tmp/spdk.key-sha256.GGY /tmp/spdk.key-sha384.5Lb /tmp/spdk.key-sha512.2hG /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:57:53.154 03:54:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:57:56.448 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:57:56.448 0000:00:04.6 (8086 2021): Already using the 
vfio-pci driver 00:57:56.448 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:57:56.448 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:57:56.448 00:57:56.448 real 0m50.553s 00:57:56.448 user 0m43.867s 00:57:56.448 sys 0m12.760s 00:57:56.448 03:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1125 -- # xtrace_disable 00:57:56.448 03:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:57:56.448 ************************************ 00:57:56.448 END TEST nvmf_auth_host 00:57:56.448 ************************************ 00:57:56.448 03:54:37 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:57:56.448 03:54:37 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:57:56.448 03:54:37 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:57:56.448 03:54:37 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:57:56.448 03:54:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:57:56.448 ************************************ 00:57:56.448 START TEST nvmf_digest 00:57:56.448 ************************************ 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:57:56.448 * Looking for test storage... 
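With nvmf_auth_host finished (roughly 50 seconds wall time), nvmf.sh hands control to the digest suite with the invocation shown in the START banner. Outside the harness the same suite can be launched directly; run_test essentially just adds the banner, timing, and xtrace bookkeeping around it:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/host/digest.sh --transport=tcp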
00:57:56.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:57:56.448 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:57:56.449 03:54:37 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:57:56.449 03:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:58:03.027 Found 0000:86:00.0 (0x8086 - 0x159b) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:58:03.027 Found 0000:86:00.1 (0x8086 - 0x159b) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:58:03.027 Found net devices under 0000:86:00.0: cvl_0_0 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:58:03.027 Found net devices under 0000:86:00.1: cvl_0_1 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:58:03.027 03:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:03.027 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:03.027 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:03.027 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:58:03.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:03.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:58:03.028 00:58:03.028 --- 10.0.0.2 ping statistics --- 00:58:03.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:03.028 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:03.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
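The trace above shows nvmf_tcp_init building the loopback testbed: the first E810 port (cvl_0_0) is moved into a fresh network namespace as the target side (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and reachability is verified with pings in both directions. A minimal re-creation under the assumption that no E810 hardware is available, with a veth pair standing in for the physical cvl_0_* ports:

  # Hypothetical substitute: veth pair named after the real ports.
  ip link add cvl_0_1 type veth peer name cvl_0_0
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, as verified in the log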
00:58:03.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:58:03.028 00:58:03.028 --- 10.0.0.1 ping statistics --- 00:58:03.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:03.028 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:58:03.028 ************************************ 00:58:03.028 START TEST nvmf_digest_clean 00:58:03.028 ************************************ 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # run_digest 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@723 -- # xtrace_disable 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2397600 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2397600 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2397600 ']' 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:03.028 
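nvmfappstart, whose pid bookkeeping is being traced here, wraps a simple pattern: launch nvmf_tgt pinned inside the target namespace with --wait-for-rpc, remember its pid, and poll the RPC socket until the app answers. A rough equivalent, assuming the namespace from the previous step and using rpc_get_methods as a convenient liveness probe (an assumption; any cheap RPC works):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock -t 5 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done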
03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:03.028 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:03.028 [2024-06-11 03:54:44.199096] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:03.028 [2024-06-11 03:54:44.199143] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:03.028 EAL: No free 2048 kB hugepages reported on node 1 00:58:03.028 [2024-06-11 03:54:44.264064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:03.028 [2024-06-11 03:54:44.305912] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:03.028 [2024-06-11 03:54:44.305951] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:03.028 [2024-06-11 03:54:44.305958] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:03.028 [2024-06-11 03:54:44.305964] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:03.028 [2024-06-11 03:54:44.305969] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:58:03.028 [2024-06-11 03:54:44.305985] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:58:03.596 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:03.596 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:58:03.596 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:58:03.596 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@729 -- # xtrace_disable 00:58:03.596 03:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:03.855 null0 00:58:03.855 [2024-06-11 03:54:45.100696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:03.855 [2024-06-11 03:54:45.124851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2397845 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2397845 /var/tmp/bperf.sock 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2397845 ']' 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:58:03.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:03.855 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:03.855 [2024-06-11 03:54:45.171211] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:03.855 [2024-06-11 03:54:45.171256] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397845 ] 00:58:03.855 EAL: No free 2048 kB hugepages reported on node 1 00:58:03.855 [2024-06-11 03:54:45.230155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:04.114 [2024-06-11 03:54:45.272926] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:58:04.114 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:04.114 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:58:04.114 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:58:04.114 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:58:04.114 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:58:04.372 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:04.372 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:04.630 nvme0n1 00:58:04.630 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:58:04.630 03:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:58:04.630 Running I/O for 2 seconds... 
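Each run_bperf iteration follows the same five steps, all visible in the trace: start bdevperf against its own RPC socket, finish framework init, attach the remote namespace with data digest enabled (--ddgst, which is what drives the crc32c traffic under test), wait for nvme0n1 to surface, then fire perform_tests. Condensed, with every command taken from the log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests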
00:58:06.532 00:58:06.532 Latency(us) 00:58:06.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:58:06.532 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:58:06.532 nvme0n1 : 2.01 26800.46 104.69 0.00 0.00 4770.75 2137.72 17601.10 00:58:06.532 =================================================================================================================== 00:58:06.532 Total : 26800.46 104.69 0.00 0.00 4770.75 2137.72 17601.10 00:58:06.532 0 00:58:06.791 03:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:58:06.791 03:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:58:06.791 03:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:58:06.791 03:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:58:06.791 03:54:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:58:06.791 | select(.opcode=="crc32c") 00:58:06.791 | "\(.module_name) \(.executed)"' 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2397845 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2397845 ']' 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2397845 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2397845 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:58:06.791 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:58:06.792 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2397845' 00:58:06.792 killing process with pid 2397845 00:58:06.792 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2397845 00:58:06.792 Received shutdown signal, test time was about 2.000000 seconds 00:58:06.792 00:58:06.792 Latency(us) 00:58:06.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:58:06.792 =================================================================================================================== 00:58:06.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:58:06.792 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2397845 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:58:07.051 03:54:48 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2398316 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2398316 /var/tmp/bperf.sock 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2398316 ']' 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:58:07.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:07.051 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:07.051 [2024-06-11 03:54:48.380448] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:07.051 [2024-06-11 03:54:48.380498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398316 ] 00:58:07.051 I/O size of 131072 is greater than zero copy threshold (65536). 00:58:07.051 Zero copy mechanism will not be used. 
00:58:07.051 EAL: No free 2048 kB hugepages reported on node 1 00:58:07.051 [2024-06-11 03:54:48.440312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:07.311 [2024-06-11 03:54:48.476712] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:58:07.311 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:07.311 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:58:07.311 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:58:07.311 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:58:07.311 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:58:07.570 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:07.570 03:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:07.829 nvme0n1 00:58:07.829 03:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:58:07.829 03:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:58:07.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:58:07.829 Zero copy mechanism will not be used. 00:58:07.829 Running I/O for 2 seconds... 
00:58:10.364 00:58:10.364 Latency(us) 00:58:10.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:58:10.364 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:58:10.364 nvme0n1 : 2.00 4722.59 590.32 0.00 0.00 3385.30 936.23 5929.45 00:58:10.365 =================================================================================================================== 00:58:10.365 Total : 4722.59 590.32 0.00 0.00 3385.30 936.23 5929.45 00:58:10.365 0 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:58:10.365 | select(.opcode=="crc32c") 00:58:10.365 | "\(.module_name) \(.executed)"' 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2398316 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2398316 ']' 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2398316 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2398316 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2398316' 00:58:10.365 killing process with pid 2398316 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2398316 00:58:10.365 Received shutdown signal, test time was about 2.000000 seconds 00:58:10.365 00:58:10.365 Latency(us) 00:58:10.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:58:10.365 =================================================================================================================== 00:58:10.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2398316 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:58:10.365 03:54:51 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2398786 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2398786 /var/tmp/bperf.sock 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2398786 ']' 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:58:10.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:10.365 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:10.365 [2024-06-11 03:54:51.659287] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:58:10.365 [2024-06-11 03:54:51.659336] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398786 ] 00:58:10.365 EAL: No free 2048 kB hugepages reported on node 1 00:58:10.365 [2024-06-11 03:54:51.719393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:10.365 [2024-06-11 03:54:51.757964] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:58:10.624 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:10.624 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:58:10.624 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:58:10.624 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:58:10.624 03:54:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:58:10.624 03:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:10.624 03:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:11.192 nvme0n1 00:58:11.192 03:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:58:11.192 03:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:58:11.192 Running I/O for 2 seconds... 
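This randwrite pass and the final large-block one reuse the flow sketched after the first randread run; across all four run_bperf invocations only the workload knobs change:

  # Per-pass bdevperf arguments, read off the four run_bperf calls in this log:
  #   randread   -o 4096    -q 128
  #   randread   -o 131072  -q 16   (131072 exceeds the 65536-byte zero-copy
  #                                  threshold, hence the "will not be used" notice)
  #   randwrite  -o 4096    -q 128
  #   randwrite  -o 131072  -q 16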
00:58:13.094
00:58:13.094 Latency(us)
00:58:13.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:13.094 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:58:13.094 nvme0n1 : 2.00 27588.17 107.77 0.00 0.00 4631.73 4369.07 14043.43
00:58:13.094 ===================================================================================================================
00:58:13.094 Total : 27588.17 107.77 0.00 0.00 4631.73 4369.07 14043.43
00:58:13.094 0
00:58:13.353 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:58:13.353 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:58:13.353 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:58:13.353 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:58:13.353 | select(.opcode=="crc32c")
00:58:13.353 | "\(.module_name) \(.executed)"'
00:58:13.353 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2398786
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2398786 ']'
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2398786
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2398786
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2398786'
killing process with pid 2398786
03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2398786
Received shutdown signal, test time was about 2.000000 seconds
00:58:13.354
00:58:13.354 Latency(us)
00:58:13.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:13.354 ===================================================================================================================
00:58:13.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:58:13.354 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2398786
00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:58:13.613 03:54:54
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2399316 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2399316 /var/tmp/bperf.sock 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # '[' -z 2399316 ']' 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:58:13.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:13.613 03:54:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:13.613 [2024-06-11 03:54:54.946366] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:13.613 [2024-06-11 03:54:54.946412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399316 ] 00:58:13.613 I/O size of 131072 is greater than zero copy threshold (65536). 00:58:13.613 Zero copy mechanism will not be used. 
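After each run, digest.sh pulls accel framework statistics over the same socket and insists the crc32c work ran in the expected module (software here, since scan_dsa=false). A sketch of that check, reusing the jq filter visible in the log; $SPDK and $SOCK as in the earlier sketch:

    # Extract "<module> <executed>" for the crc32c opcode, mirroring host/digest.sh@93-96.
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
        jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
    )

    # Pass only if crc32c actually executed, and in the software module.
    (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digest offload check OK"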
00:58:13.613 EAL: No free 2048 kB hugepages reported on node 1 00:58:13.613 [2024-06-11 03:54:55.005895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:13.873 [2024-06-11 03:54:55.046852] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:58:13.873 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:13.873 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@863 -- # return 0 00:58:13.873 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:58:13.873 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:58:13.873 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:58:14.132 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:14.132 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:14.391 nvme0n1 00:58:14.391 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:58:14.391 03:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:58:14.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:58:14.391 Zero copy mechanism will not be used. 00:58:14.391 Running I/O for 2 seconds... 
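Each waitforlisten call above blocks until the freshly started process answers on its UNIX-domain RPC socket, giving up after max_retries=100. The helper's body is not shown in this log; a hypothetical stand-in with the same contract (function name and polling details are illustrative, though rpc_get_methods is a standard SPDK RPC):

    # Hypothetical wait loop; the real waitforlisten lives in common/autotest_common.sh.
    wait_for_rpc_socket() {
        local sock=$1 retries=100
        # rpc_get_methods succeeds only once the app is serving RPCs on $sock.
        until "$SPDK/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
            (( --retries > 0 )) || return 1
            sleep 0.1
        done
    }
    wait_for_rpc_socket /var/tmp/bperf.sock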
00:58:16.313
00:58:16.313 Latency(us)
00:58:16.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:16.313 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:58:16.313 nvme0n1 : 2.00 5460.53 682.57 0.00 0.00 2925.02 1989.49 15104.49
00:58:16.313 ===================================================================================================================
00:58:16.313 Total : 5460.53 682.57 0.00 0.00 2925.02 1989.49 15104.49
00:58:16.313 0
00:58:16.313 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:58:16.313 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:58:16.313 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:58:16.313 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:58:16.313 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:58:16.313 | select(.opcode=="crc32c")
00:58:16.313 | "\(.module_name) \(.executed)"'
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2399316
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2399316 ']'
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2399316
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2399316
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2399316'
killing process with pid 2399316
03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2399316
Received shutdown signal, test time was about 2.000000 seconds
00:58:16.617
00:58:16.617 Latency(us)
00:58:16.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:16.617 ===================================================================================================================
00:58:16.617 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:58:16.617 03:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2399316
00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2397600
00:58:16.876 03:54:58
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@949 -- # '[' -z 2397600 ']' 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # kill -0 2397600 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # uname 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2397600 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2397600' 00:58:16.876 killing process with pid 2397600 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # kill 2397600 00:58:16.876 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # wait 2397600 00:58:17.136 00:58:17.136 real 0m14.148s 00:58:17.136 user 0m26.202s 00:58:17.136 sys 0m4.365s 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:58:17.136 ************************************ 00:58:17.136 END TEST nvmf_digest_clean 00:58:17.136 ************************************ 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1106 -- # xtrace_disable 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:58:17.136 ************************************ 00:58:17.136 START TEST nvmf_digest_error 00:58:17.136 ************************************ 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # run_digest_error 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@723 -- # xtrace_disable 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2399977 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2399977 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2399977 ']' 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@835 -- # local max_retries=100 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:17.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:17.136 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:17.136 [2024-06-11 03:54:58.415613] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:17.136 [2024-06-11 03:54:58.415653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:17.136 EAL: No free 2048 kB hugepages reported on node 1 00:58:17.136 [2024-06-11 03:54:58.477153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:17.136 [2024-06-11 03:54:58.516927] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:17.136 [2024-06-11 03:54:58.516962] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:17.136 [2024-06-11 03:54:58.516969] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:17.136 [2024-06-11 03:54:58.516975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:17.136 [2024-06-11 03:54:58.516980] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
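The target was started with -e 0xFFFF, so every tracepoint group is live, and the startup notices above give the inspection commands verbatim. Assuming the same build tree (the build/bin location of spdk_trace is an assumption here), one can either snapshot the trace while the target runs or keep the shared-memory file for later:

    # Live snapshot of the nvmf target's tracepoints (app instance id 0),
    # exactly as the app_setup_trace notice suggests.
    "$SPDK/build/bin/spdk_trace" -s nvmf -i 0

    # Or copy the trace shared-memory file for offline analysis after the test.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0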
00:58:17.136 [2024-06-11 03:54:58.516996] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@729 -- # xtrace_disable 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:17.396 [2024-06-11 03:54:58.585456] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:17.396 null0 00:58:17.396 [2024-06-11 03:54:58.670326] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:17.396 [2024-06-11 03:54:58.694475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2400000 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2400000 /var/tmp/bperf.sock 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2400000 ']' 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local 
max_retries=100 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:58:17.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:17.396 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:17.396 [2024-06-11 03:54:58.742132] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:17.396 [2024-06-11 03:54:58.742172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400000 ] 00:58:17.396 EAL: No free 2048 kB hugepages reported on node 1 00:58:17.396 [2024-06-11 03:54:58.799779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:17.655 [2024-06-11 03:54:58.839308] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:58:17.655 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:17.655 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0 00:58:17.655 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:58:17.655 03:54:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:58:17.914 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:58:17.914 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:17.914 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:17.914 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:17.914 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:17.914 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:58:18.173 nvme0n1 00:58:18.173 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:58:18.173 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:18.173 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:58:18.173 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:18.173 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:58:18.173 03:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:58:18.173 Running I/O for 2 seconds...
00:58:18.173 [2024-06-11 03:54:59.515518] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.173 [2024-06-11 03:54:59.515556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.173 [2024-06-11 03:54:59.515570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:18.173 [2024-06-11 03:54:59.525974] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.173 [2024-06-11 03:54:59.526002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.173 [2024-06-11 03:54:59.526021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:18.173 [2024-06-11 03:54:59.534886] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.173 [2024-06-11 03:54:59.534908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.173 [2024-06-11 03:54:59.534919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:18.173 [2024-06-11 03:54:59.543015] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.173 [2024-06-11 03:54:59.543038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.173 [2024-06-11 03:54:59.543050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:18.173 [2024-06-11 03:54:59.553092] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.173 [2024-06-11 03:54:59.553114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.173 [2024-06-11 03:54:59.553126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:18.173 [2024-06-11 03:54:59.561932] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.173 [2024-06-11 03:54:59.561955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.173 [2024-06-11 03:54:59.561967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:18.173 [2024-06-11 03:54:59.573399] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.173 [2024-06-11 03:54:59.573422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.173 [2024-06-11 03:54:59.573434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
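Every read above completes with TRANSIENT TRANSPORT ERROR, and that is the test working as intended: on the target, crc32c was routed to the accel error module and told to corrupt results, so the data digest carried in each data PDU no longer matches the crc32c the host recomputes on receive (nvme_tcp_accel_seq_recv_compute_crc32_done), while --bdev-retry-count -1 on the bdevperf side turns every failure into a retry rather than a hard I/O error. Condensed from the RPCs issued earlier in this log (rpc_cmd resolves to the nvmf target's default socket; bperf_rpc targets /var/tmp/bperf.sock — shown here as plain rpc.py calls for illustration):

    # Target side: route all crc32c work through the error-injecting accel
    # module while the target is still in --wait-for-rpc configuration.
    "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error

    # Host (bdevperf) side: keep per-type NVMe error counters and retry failed
    # I/O forever so injected digest errors never fail the run outright.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Target side again: injection starts disabled, then switches to corrupting
    # crc32c results (-t corrupt, with the -i 256 argument used in this log)
    # once the nvme0 controller is attached with --ddgst.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256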
00:58:18.433 [2024-06-11 03:54:59.584549] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.433 [2024-06-11 03:54:59.584572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.433 [2024-06-11 03:54:59.584583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly forty further failed reads follow in the same three-line pattern (data digest error, READ command print, TRANSIENT TRANSPORT ERROR completion) between 03:54:59.592 and 03:55:00.288, all on tqpair 0x226a270 and differing only in timestamp, cid, and lba ...]
00:58:18.978 [2024-06-11 03:55:00.298932] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.978 [2024-06-11 03:55:00.298952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:18.978 [2024-06-11 03:55:00.298963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:18.978 [2024-06-11 03:55:00.308119] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270)
00:58:18.979 [2024-06-11 03:55:00.308140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.308150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:18.979 [2024-06-11 03:55:00.317027] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:18.979 [2024-06-11 03:55:00.317048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.317060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:18.979 [2024-06-11 03:55:00.326693] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:18.979 [2024-06-11 03:55:00.326713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.326723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:18.979 [2024-06-11 03:55:00.338964] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:18.979 [2024-06-11 03:55:00.338985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.338995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:18.979 [2024-06-11 03:55:00.348552] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:18.979 [2024-06-11 03:55:00.348571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.348586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:18.979 [2024-06-11 03:55:00.356869] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:18.979 [2024-06-11 03:55:00.356889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.356900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:18.979 [2024-06-11 03:55:00.368193] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:18.979 [2024-06-11 03:55:00.368214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.368225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:18.979 [2024-06-11 03:55:00.380014] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:18.979 [2024-06-11 03:55:00.380034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:18.979 [2024-06-11 03:55:00.380045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.389651] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.389671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.389683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.397866] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.397886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.397896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.408114] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.408134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.408144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.416911] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.416931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.416942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.426517] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.426537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.426548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.435791] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.435812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.435822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.444581] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.444601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.444611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.453891] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.453911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.453922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.463475] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.463495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.463505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.471445] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.471465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.471475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.480596] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.480616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.480627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.490573] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.490593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.490604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.498412] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.498432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.498442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.508296] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.508316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.508332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.517732] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.517751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.517762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.526486] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.526507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.526517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.536082] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.536103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.536113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.545678] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.545699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.545710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.554184] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.554203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.554214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.563860] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.563880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.563891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.572573] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.572593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.572604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.580968] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.239 [2024-06-11 03:55:00.580988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.239 [2024-06-11 03:55:00.580998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.239 [2024-06-11 03:55:00.591280] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.240 [2024-06-11 03:55:00.591305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.240 [2024-06-11 03:55:00.591316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.240 [2024-06-11 03:55:00.600567] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.240 [2024-06-11 03:55:00.600588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.240 [2024-06-11 03:55:00.600599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.240 [2024-06-11 03:55:00.609383] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.240 [2024-06-11 03:55:00.609404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.240 [2024-06-11 03:55:00.609414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.240 [2024-06-11 03:55:00.618203] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.240 [2024-06-11 03:55:00.618225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.240 [2024-06-11 03:55:00.618235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.240 [2024-06-11 03:55:00.627269] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.240 [2024-06-11 03:55:00.627291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:58:19.240 [2024-06-11 03:55:00.627302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.240 [2024-06-11 03:55:00.636973] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.240 [2024-06-11 03:55:00.636994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.240 [2024-06-11 03:55:00.637005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.647386] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.647410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.647421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.655287] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.655309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.655320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.665267] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.665289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.665299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.675135] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.675156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.675167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.683196] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.683217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.683227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.695314] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.695335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:178 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.695346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.706141] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.706162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.706172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.714166] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.714187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.714198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.723904] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.723925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.723936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.733518] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.733539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.733550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.741373] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.741394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.741405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.750720] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.750741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.750756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.499 [2024-06-11 03:55:00.759906] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.499 [2024-06-11 03:55:00.759928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.499 [2024-06-11 03:55:00.759941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.768828] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.768851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.768862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.778760] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.778782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.778794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.787644] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.787665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.787675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.798076] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.798098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.798109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.806065] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.806086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.806097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.815430] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.815451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.815461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.824575] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 
00:58:19.500 [2024-06-11 03:55:00.824596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.824607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.834138] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.834158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.834168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.843707] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.843726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.843737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.852404] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.852426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.852438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.863362] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.863383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.863394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.870724] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.870745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.870756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.880124] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.880145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.880155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.889535] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.889555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.889566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.500 [2024-06-11 03:55:00.898828] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.500 [2024-06-11 03:55:00.898849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.500 [2024-06-11 03:55:00.898860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.759 [2024-06-11 03:55:00.907529] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.759 [2024-06-11 03:55:00.907550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.759 [2024-06-11 03:55:00.907565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.759 [2024-06-11 03:55:00.919662] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.759 [2024-06-11 03:55:00.919683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.919693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.930537] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.930558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.930569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.939067] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.939088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.939098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.948580] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.948600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.948610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.958103] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.958123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.958133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.968147] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.968169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.968180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.976440] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.976460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.976470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.986313] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.986333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.986344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:00.996399] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:00.996424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:00.996435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.007212] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.007234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.007244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.016900] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.016922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.016931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.025357] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.025378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.025388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.033784] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.033804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.033814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.043963] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.043984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.043995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.052698] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.052721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.052732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.063181] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.063202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.063212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.073509] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.073529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.073540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.081303] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.081323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.081334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.092475] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.092495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.092506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.103914] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.103934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.103944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.112278] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.112298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.112309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.120341] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.120361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.120372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.130705] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.130726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.130736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.141154] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.141174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.141184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.149380] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.149399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.149410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:19.760 [2024-06-11 03:55:01.160542] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:19.760 [2024-06-11 03:55:01.160563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:19.760 [2024-06-11 03:55:01.160578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.172095] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.172116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.172126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.182563] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.182583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.182593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.191033] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.191053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.191064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.201409] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.201429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.201440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.210239] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.210258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.210269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.220837] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.220856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:58:20.020 [2024-06-11 03:55:01.220866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.229257] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.229277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.229287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.239518] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.239538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.239548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.247396] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.247420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.247430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.258795] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.258816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.258826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.268400] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.020 [2024-06-11 03:55:01.268420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.020 [2024-06-11 03:55:01.268430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.020 [2024-06-11 03:55:01.276644] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.276664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.276675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.286609] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.286629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:19861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.286640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.295691] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.295712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.295722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.303930] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.303949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.303961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.313367] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.313388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.313399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.323565] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.323585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.323601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.331416] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.331436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.331447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.341895] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.341915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.341926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.353624] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.353644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.353654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.361723] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.361743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.361754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.371500] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.371520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.371531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.381178] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.381208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.389726] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.389746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.389757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.399691] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.399711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.399721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.408526] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.021 [2024-06-11 03:55:01.408550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.408560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.021 [2024-06-11 03:55:01.418788] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 
00:58:20.021 [2024-06-11 03:55:01.418808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.021 [2024-06-11 03:55:01.418819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.428114] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.428134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.428145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.436518] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.436538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.436548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.447638] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.447659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.447669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.456016] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.456037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.456047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.465385] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.465405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.465416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.474528] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.474549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.474560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.483689] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.483709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.483719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.492301] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.492321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.492331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:20.280 [2024-06-11 03:55:01.501812] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x226a270) 00:58:20.280 [2024-06-11 03:55:01.501832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:20.280 [2024-06-11 03:55:01.501843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:58:20.280
00:58:20.280 Latency(us)
00:58:20.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:20.280 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:58:20.280 nvme0n1 : 2.00 26629.57 104.02 0.00 0.00 4800.82 2122.12 16103.13
00:58:20.280 ===================================================================================================================
00:58:20.280 Total : 26629.57 104.02 0.00 0.00 4800.82 2122.12 16103.13
00:58:20.280 0
00:58:20.280 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:58:20.280 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:58:20.280 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:58:20.280 | .driver_specific
00:58:20.281 | .nvme_error
00:58:20.281 | .status_code
00:58:20.281 | .command_transient_transport_error'
00:58:20.281 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 ))
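The check traced above is the pass/fail pivot of the whole scenario: get_transient_errcount asks the bdevperf process for its per-bdev I/O statistics over the bperf RPC socket, and the jq filter digs out how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR, the status the corrupted data digests are expected to produce (209 of them in this run, hence the passing (( 209 > 0 )) check). A standalone sketch of that helper, reconstructed from the xtrace output rather than copied from host/digest.sh, assuming the workspace and socket paths shown in the trace:

  # Reconstructed from the trace above; the real helper lives in host/digest.sh.
  get_transient_errcount() {
      local bdev=$1
      # bdev_get_iostat reports per-status NVMe error counters here because the
      # controller is set up with --nvme-error-stat (see the bdev_nvme_set_options
      # call traced further down for the next iteration).
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
  }

  # The digest-error run passes only if at least one injected corruption
  # surfaced as a transient transport error:
  (( $(get_transient_errcount nvme0n1) > 0 ))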
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2400000
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2400000 ']'
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2400000
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2400000
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2400000'
killing process with pid 2400000
03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2400000
00:58:20.546 Received shutdown signal, test time was about 2.000000 seconds
00:58:20.546
00:58:20.546 Latency(us)
00:58:20.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:20.546 ===================================================================================================================
00:58:20.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2400000
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2400474
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2400474 /var/tmp/bperf.sock
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2400474 ']'
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:58:20.546 03:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:20.807 [2024-06-11 03:55:01.957032] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:58:20.807 [2024-06-11 03:55:01.957075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400474 ]
00:58:20.807 I/O size of 131072 is greater than zero copy threshold (65536).
00:58:20.807 Zero copy mechanism will not be used.
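run_bperf_err then starts the next iteration (randread, 131072-byte I/O, queue depth 16): a fresh bdevperf is forked with -z so it sits idle waiting for RPC commands on /var/tmp/bperf.sock, and waitforlisten polls that socket until the new process (pid 2400474 here) answers; its EAL and app startup notices continue below. A minimal sketch of this launch-and-wait pattern, assuming the workspace paths from the trace (the real waitforlisten in autotest_common.sh retries more carefully, e.g. it also checks that the pid is still alive):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # -z parks bdevperf until perform_tests arrives over the RPC socket;
  # -m 2 is core mask 0x2, matching "Reactor started on core 1" below.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Poll the UNIX domain socket until the process is ready (max_retries=100 above).
  for ((i = 0; i < 100; i++)); do
      "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done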
00:58:20.807 EAL: No free 2048 kB hugepages reported on node 1
00:58:20.807 [2024-06-11 03:55:02.014727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:58:20.807 [2024-06-11 03:55:02.055166] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:58:20.807 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:58:20.807 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:58:20.807 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:58:20.807 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:58:21.065 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:58:21.065 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:21.065 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:21.065 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:21.065 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:58:21.065 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:58:21.323 nvme0n1
00:58:21.323 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:58:21.323 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:21.323 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:21.323 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:21.323 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:58:21.323 03:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:58:21.582 I/O size of 131072 is greater than zero copy threshold (65536).
00:58:21.582 Zero copy mechanism will not be used.
00:58:21.582 Running I/O for 2 seconds...
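The setup traced just above is the digest-error arming sequence for this iteration: NVMe error statistics are switched on and bdev-level retries disabled (--bdev-retry-count -1) so every failed completion stays visible in iostat; any leftover CRC-32C error injection is cleared; the TCP controller is attached with data digest enabled (--ddgst, so the initiator verifies a CRC-32C over each READ payload); the accel layer is then told to corrupt CRC-32C results (-o crc32c -t corrupt -i 32, arguments verbatim from the trace); and perform_tests releases the queued job, producing the two-second flood of data digest errors that follows. The same sequence consolidated into one plain script under those assumptions (rpc_cmd and bperf_py in digest.sh are thin wrappers around these two tools):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Keep per-status NVMe error counters and never retry at the bdev layer,
  # so each injected digest error is surfaced and counted exactly once.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Start from a clean slate: no CRC-32C error injection active.
  $RPC accel_error_inject_error -o crc32c -t disable

  # Attach the target with data digest on (data digest only; no --hdgst).
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm the CRC-32C corruption and kick off the queued bdevperf job.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests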
00:58:21.582 [2024-06-11 03:55:02.764296] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.764327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.764339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.772831] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.772859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.772871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.780675] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.780697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.780708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.787944] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.787965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.787976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.794934] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.794954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.794965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.801378] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.801399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.801410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.807714] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.807735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.807745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.815256] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.815278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.815294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.823136] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.823161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.823172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.830213] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.830234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.830245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.837309] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.837330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.837341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.843465] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.843486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.843497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.849468] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.849489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.849499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.855333] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.855353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.855363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.861184] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.861204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.861214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.866979] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.866999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.867017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.872683] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.872704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.872714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.878368] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.582 [2024-06-11 03:55:02.878388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.582 [2024-06-11 03:55:02.878399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.582 [2024-06-11 03:55:02.884038] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.884058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.884068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.889619] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.889639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.889649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.895149] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.895168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:58:21.583 [2024-06-11 03:55:02.895178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.900822] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.900842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.900852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.906510] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.906531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.906541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.912033] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.912053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.912063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.917562] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.917582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.917597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.923056] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.923077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.923087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.928468] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.928488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.928499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.933846] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.933867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.933877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.939222] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.939243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.939253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.944599] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.944619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.944630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.950042] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.950061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.950071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.955455] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.955475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.955486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.960982] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.961002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.961027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.966581] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.966605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.966617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.972013] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.972033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.972044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.977437] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.977457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.977468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.583 [2024-06-11 03:55:02.982902] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.583 [2024-06-11 03:55:02.982923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.583 [2024-06-11 03:55:02.982933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.842 [2024-06-11 03:55:02.988457] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.842 [2024-06-11 03:55:02.988477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.842 [2024-06-11 03:55:02.988487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.842 [2024-06-11 03:55:02.994137] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.842 [2024-06-11 03:55:02.994157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.842 [2024-06-11 03:55:02.994167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.842 [2024-06-11 03:55:02.999570] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.842 [2024-06-11 03:55:02.999590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.842 [2024-06-11 03:55:02.999600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.842 [2024-06-11 03:55:03.004940] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.842 [2024-06-11 03:55:03.004960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.842 [2024-06-11 03:55:03.004971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.842 [2024-06-11 03:55:03.010388] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 
00:58:21.842 [2024-06-11 03:55:03.010408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.842 [2024-06-11 03:55:03.010418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.842 [2024-06-11 03:55:03.015939] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.015959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.015969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.021452] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.021473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.021483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.026968] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.026988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.026999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.032389] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.032413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.032423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.037847] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.037868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.037879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.043643] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.043663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.043674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.050692] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.050713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.050723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.059481] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.059503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.059513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.067253] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.067277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.067288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.074564] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.074584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.074594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.081828] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.081848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.081858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.088390] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.088409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.088419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.094646] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.094666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.094676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.101968] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.101989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.102000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.110806] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.110827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.110837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.118957] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.118978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.118989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.126207] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.126227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.126237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.133301] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.133321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.133332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.140275] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.140296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.140306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.147099] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.147119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.147129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:21.843 [2024-06-11 03:55:03.153097] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:21.843 [2024-06-11 03:55:03.153117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:21.843 [2024-06-11 03:55:03.153128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-record pattern repeats from 03:55:03.158 through 03:55:04.130 (build time 00:58:21.843 to 00:58:22.888): nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest (crc32) error on tqpair=(0x23aec30), nvme_qpair.c:243 prints the affected READ command (sqid:1, nsid:1, len:32), and nvme_qpair.c:474 prints its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; entries differ only in timestamp, cid (0-11 and 14), lba, and the cycling sqhd values ...]
00:58:22.888 [2024-06-11 03:55:04.135606] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.135626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.888 [2024-06-11 03:55:04.135636] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:22.888 [2024-06-11 03:55:04.141019] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.141040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.888 [2024-06-11 03:55:04.141050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:22.888 [2024-06-11 03:55:04.146413] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.146434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.888 [2024-06-11 03:55:04.146444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:22.888 [2024-06-11 03:55:04.151872] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.151892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.888 [2024-06-11 03:55:04.151903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:22.888 [2024-06-11 03:55:04.157390] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.157411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.888 [2024-06-11 03:55:04.157422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:22.888 [2024-06-11 03:55:04.162900] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.162921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.888 [2024-06-11 03:55:04.162931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:22.888 [2024-06-11 03:55:04.168386] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.168407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.888 [2024-06-11 03:55:04.168422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:22.888 [2024-06-11 03:55:04.173849] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.888 [2024-06-11 03:55:04.173870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.173880] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.179343] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.179364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.179374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.184804] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.184825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.184835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.190340] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.190360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.190370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.195873] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.195894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.195904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.201314] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.201334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.201344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.206795] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.206817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.206827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.212224] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.212244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:58:22.889 [2024-06-11 03:55:04.212254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.217781] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.217801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.217811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.223389] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.223410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.223420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.228860] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.228880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.228890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.234304] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.234325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.234335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.239653] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.239674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.239683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.245004] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.245030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.245041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.250502] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.250523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.250533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.256037] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.256057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.256067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.261584] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.261604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.261619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.267104] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.267125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.267135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.272592] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.272612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.272623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.278047] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.278067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.278077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.283529] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.889 [2024-06-11 03:55:04.283550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.889 [2024-06-11 03:55:04.283560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:22.889 [2024-06-11 03:55:04.289147] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:22.890 [2024-06-11 03:55:04.289166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:22.890 [2024-06-11 03:55:04.289177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.294815] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.294837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.294847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.300324] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.300345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.300356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.305856] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.305877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.305887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.311494] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.311518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.311528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.317198] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.317218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.317229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.323020] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.323040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.323050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.328731] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 
[2024-06-11 03:55:04.328752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.328761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.334329] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.334350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.334360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.340035] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.340055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.340065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.345691] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.345713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.345723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.351144] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.351165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.351174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.356590] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.356610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.356620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.362129] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.362150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.362160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.367612] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.367633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.367643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.373111] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.373132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.373142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.378508] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.378529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.378540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.383866] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.383887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.383897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.389288] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.389308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.389318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.394703] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.394724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.394734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.400208] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.400229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.400239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.405804] 
nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.405825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.405841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.411267] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.411288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.411299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.416658] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.416679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.416689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.421976] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.421998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.422008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.427300] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.427322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.427331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.432677] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.148 [2024-06-11 03:55:04.432699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.148 [2024-06-11 03:55:04.432709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.148 [2024-06-11 03:55:04.438168] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.438189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.438199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:58:23.149 [2024-06-11 03:55:04.443653] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.443674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.443684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.449204] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.449225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.449235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.454644] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.454667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.454678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.460153] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.460174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.460184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.465652] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.465673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.465683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.471346] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.471368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.471377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.477021] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.477042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.477052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.482502] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.482523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.482532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.487999] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.488026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.488036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.493538] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.493559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.493569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.499128] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.499148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.499158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.504803] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.504824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.504834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.510380] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.510400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.510411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.516955] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.516977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.516987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.522616] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.522637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.522647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.528158] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.528178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.528188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.533606] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.533626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.533637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.540259] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.540280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.540291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.149 [2024-06-11 03:55:04.546555] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.149 [2024-06-11 03:55:04.546577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.149 [2024-06-11 03:55:04.546587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.553824] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.553846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.553861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.561644] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.561666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:58:23.408 [2024-06-11 03:55:04.561676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.569331] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.569353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.569363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.577224] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.577246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.577267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.584458] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.584480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.584490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.591438] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.591459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.591469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.597909] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.597930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.597940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.603935] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.603955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.603966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.609940] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.609962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.609972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.615996] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.616022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.616033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.622650] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.622672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.622682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.629545] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.629567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.629578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.636308] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.636330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.636341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.643185] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.643207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.643217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.649794] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.649815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.649826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.655795] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.655817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.655829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.661850] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.661871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.661881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.667714] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.667735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.667750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.673632] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.673653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.673664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.679425] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.679446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.679456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.685131] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.685152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.685162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.690895] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.690916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.690926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.694821] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 
[2024-06-11 03:55:04.694841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.694852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.699242] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.699262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.704274] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.704295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.704321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.709530] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.709551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.709561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.714787] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.714815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.714825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.719966] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.719987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.719997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.725202] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.725222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.725232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.730502] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.730523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.730533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.735877] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.735898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.735908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.740953] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.740974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.740984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.746227] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.746249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.746260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.751160] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.751181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.751191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.756347] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.756369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.756379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:23.408 [2024-06-11 03:55:04.761573] nvme_tcp.c:1454:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23aec30) 00:58:23.408 [2024-06-11 03:55:04.761594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:23.408 [2024-06-11 03:55:04.761604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:23.408 00:58:23.408 Latency(us) 00:58:23.408 Device 
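Every failure above is one and the same event: the crc32c the host's accel framework computes for a received C2HData payload does not match the data digest carried in the PDU, so the READ is completed with Command Transient Transport Error (printed as (00/22), i.e. status code type 00h, status code 22h); dnr:0 marks the completion retryable, which is why the workload keeps running until the timer expires. A minimal sketch of the knobs behind this, reconstructed from the RPC calls traced further down; the 'corrupt' inject type and the socket the injection call targets are assumptions, since this log only shows injection being switched off with '-t disable':

# Sketch only -- the real sequence lives in host/digest.sh.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: keep per-status-code NVMe error counters and retry failed IO
# forever (this call is traced verbatim later in this log).
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Assumed: make the accel framework corrupt every software crc32c result so
# data digests miscompare ('corrupt' is inferred; the trace only shows the
# matching '-t disable' clean-up call).
$rpc accel_error_inject_error -o crc32c -t corrupt

# Attach over TCP with data digest checking enabled (--ddgst); from here on
# every READ fails digest validation and completes as (00/22), as logged above.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0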
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:23.408 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:58:23.408 nvme0n1 : 2.00 4806.05 600.76 0.00 0.00 3325.98 643.66 10860.25
00:58:23.408 ===================================================================================================================
00:58:23.408 Total : 4806.05 600.76 0.00 0.00 3325.98 643.66 10860.25
00:58:23.408 0
00:58:23.408 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:58:23.408 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:58:23.408 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:58:23.408 | .driver_specific
00:58:23.408 | .nvme_error
00:58:23.408 | .status_code
00:58:23.408 | .command_transient_transport_error'
00:58:23.409 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:58:23.666 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 310 > 0 ))
00:58:23.666 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2400474
00:58:23.666 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2400474 ']'
00:58:23.666 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2400474
00:58:23.666 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:58:23.666 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:58:23.666 03:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2400474
00:58:23.666 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:58:23.666 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:58:23.666 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2400474'
00:58:23.666 killing process with pid 2400474
00:58:23.666 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2400474
00:58:23.666 Received shutdown signal, test time was about 2.000000 seconds
00:58:23.666
00:58:23.666 Latency(us)
00:58:23.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:23.666 ===================================================================================================================
00:58:23.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:58:23.666 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2400474
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
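The block above is the tail of the randread leg: the bdevperf summary (4806.05 IOPS at 131072-byte IOs is 4806.05 x 0.125 MiB = 600.76 MiB/s, matching the MiB/s column), followed by digest.sh fetching the per-status-code NVMe error counters over the bperf RPC socket and asserting that the transient-transport-error count is non-zero (310 here). A minimal standalone sketch of that check, assuming bdevperf is still listening on /var/tmp/bperf.sock and was configured with --nvme-error-stat; the rpc.py path and the jq filter are the ones in the trace (collapsed to one line), only the variable names are invented:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Dump the bdev's iostat and pull out the NVMe "command transient transport error" counter
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # digest.sh@71 asserts the same condition the trace shows as (( 310 > 0 ))
  (( errs > 0 )) && echo "saw $errs transient transport errors"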
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2401044
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2401044 /var/tmp/bperf.sock
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2401044 ']'
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:58:23.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:58:23.924 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:23.924 [2024-06-11 03:55:05.225652] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:58:23.924 [2024-06-11 03:55:05.225700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401044 ]
00:58:23.924 EAL: No free 2048 kB hugepages reported on node 1
00:58:23.924 [2024-06-11 03:55:05.284836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:58:23.924 [2024-06-11 03:55:05.325169] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:58:24.183 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:58:24.183 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:58:24.183 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:58:24.183 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:58:24.183 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:58:24.183 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:24.183 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:24.441 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:24.441 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:58:24.441 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:58:24.751 nvme0n1
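This is the setup handshake for the randwrite leg: bdevperf is launched idle (-z) on core 1 (-m 2 is the core mask), the harness waits for its RPC socket, turns on per-status-code NVMe error accounting with unlimited bdev retries, clears any leftover crc32c error injection, and attaches the TCP controller with --ddgst so every data PDU carries a CRC32C data digest; the attach prints the resulting bdev name, nvme0n1. A condensed, runnable sketch of the same sequence, assuming an NVMe-oF TCP target already exports nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420; all flags appear verbatim in the trace, while the bperf() helper and the socket wait loop are stand-ins for the harness's bperf_rpc and waitforlisten:

  # bperf(): helper wrapping rpc.py against the bperf RPC socket
  bperf() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done                 # wait for the RPC socket
  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry forever
  bperf accel_error_inject_error -o crc32c -t disable                  # start from a clean injector
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # data digest on; creates nvme0n1
  # Next, as the trace below shows: corrupt 256 crc32c operations, then run the workload
  bperf accel_error_inject_error -o crc32c -t corrupt -i 256
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests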
00:58:24.751 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:58:24.751 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:24.751 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:24.751 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:24.751 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:58:24.751 03:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:58:24.751 Running I/O for 2 seconds...
00:58:24.751 [2024-06-11 03:55:06.008715] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fa7d8
00:58:24.751 [2024-06-11 03:55:06.009398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:24.751 [2024-06-11 03:55:06.009424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:58:24.751 [2024-06-11 03:55:06.018043] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f4f40
00:58:24.751 [2024-06-11 03:55:06.018744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:24.751 [2024-06-11 03:55:06.018764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:58:24.751 [2024-06-11 03:55:06.029035] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0788
00:58:24.751 [2024-06-11 03:55:06.030233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:24.751 [2024-06-11 03:55:06.030253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:58:24.751 [2024-06-11 03:55:06.037531] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0350
00:58:24.751 [2024-06-11 03:55:06.038610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:24.751 [2024-06-11 03:55:06.038630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:58:24.751 [2024-06-11 03:55:06.047545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4140
00:58:24.751 [2024-06-11 03:55:06.048986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:24.751 [2024-06-11 03:55:06.049005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:58:24.751 [2024-06-11 03:55:06.056989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fda78
00:58:24.751 [2024-06-11 03:55:06.058538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.058557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.063310] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f57b0 00:58:24.752 [2024-06-11 03:55:06.064028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.064046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.072370] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f6890 00:58:24.752 [2024-06-11 03:55:06.073122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.073140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.081302] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eee38 00:58:24.752 [2024-06-11 03:55:06.082028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.082047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.090212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0bc0 00:58:24.752 [2024-06-11 03:55:06.090936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.090954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.099114] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e2c28 00:58:24.752 [2024-06-11 03:55:06.099835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.099853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.108031] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e3d08 00:58:24.752 [2024-06-11 03:55:06.108778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.108797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.116913] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) 
with pdu=0x2000190fa3a0 00:58:24.752 [2024-06-11 03:55:06.117625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.117643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.125807] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f2948 00:58:24.752 [2024-06-11 03:55:06.126517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.126535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.134706] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f3a28 00:58:24.752 [2024-06-11 03:55:06.135426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.135444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.143574] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f8618 00:58:24.752 [2024-06-11 03:55:06.144289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.144307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:24.752 [2024-06-11 03:55:06.152533] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f96f8 00:58:24.752 [2024-06-11 03:55:06.153287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:24.752 [2024-06-11 03:55:06.153306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.011 [2024-06-11 03:55:06.161665] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e0a68 00:58:25.011 [2024-06-11 03:55:06.162419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.011 [2024-06-11 03:55:06.162441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.011 [2024-06-11 03:55:06.170568] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e1b48 00:58:25.011 [2024-06-11 03:55:06.171281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.011 [2024-06-11 03:55:06.171299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.011 [2024-06-11 03:55:06.179496] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1bc17d0) with pdu=0x2000190f7970 00:58:25.011 [2024-06-11 03:55:06.180224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.011 [2024-06-11 03:55:06.180242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.011 [2024-06-11 03:55:06.188357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f1430 00:58:25.012 [2024-06-11 03:55:06.189112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.189130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.197226] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190efae0 00:58:25.012 [2024-06-11 03:55:06.197949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.197967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.206149] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f5378 00:58:25.012 [2024-06-11 03:55:06.206890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.206908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.215038] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f6458 00:58:25.012 [2024-06-11 03:55:06.215758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.215776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.223821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ef270 00:58:25.012 [2024-06-11 03:55:06.224534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.224552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.232734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ee190 00:58:25.012 [2024-06-11 03:55:06.233450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.233467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.241635] tcp.c:2062:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0788 00:58:25.012 [2024-06-11 03:55:06.242371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.242390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.250543] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e3060 00:58:25.012 [2024-06-11 03:55:06.251268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.251286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.259418] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4140 00:58:25.012 [2024-06-11 03:55:06.260155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.260173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.268544] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fd208 00:58:25.012 [2024-06-11 03:55:06.269274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.269293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.277465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f2d80 00:58:25.012 [2024-06-11 03:55:06.278184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.278201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.286324] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f3e60 00:58:25.012 [2024-06-11 03:55:06.287056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.287074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.295274] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f8a50 00:58:25.012 [2024-06-11 03:55:06.295999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.296020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.304196] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f9b30 00:58:25.012 [2024-06-11 03:55:06.304901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.304919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.313089] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e0ea0 00:58:25.012 [2024-06-11 03:55:06.313797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.313815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.321990] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e1f80 00:58:25.012 [2024-06-11 03:55:06.322733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.322751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.330871] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f20d8 00:58:25.012 [2024-06-11 03:55:06.331612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.331630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.339714] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fac10 00:58:25.012 [2024-06-11 03:55:06.340479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.340498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.348647] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f46d0 00:58:25.012 [2024-06-11 03:55:06.349382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.349400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.357538] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f57b0 00:58:25.012 [2024-06-11 03:55:06.358245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.358263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 
03:55:06.366434] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f6890 00:58:25.012 [2024-06-11 03:55:06.367139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.367157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.375352] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eee38 00:58:25.012 [2024-06-11 03:55:06.376060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.376078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.384239] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0bc0 00:58:25.012 [2024-06-11 03:55:06.384942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.384960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.393141] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e2c28 00:58:25.012 [2024-06-11 03:55:06.393896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.393917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.402038] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e3d08 00:58:25.012 [2024-06-11 03:55:06.402783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.402801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.012 [2024-06-11 03:55:06.410914] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fa3a0 00:58:25.012 [2024-06-11 03:55:06.411680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.012 [2024-06-11 03:55:06.411698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.420251] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f2948 00:58:25.272 [2024-06-11 03:55:06.420920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.420938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
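Each triple of records above is one injected fault taking effect: the accel crc32c operation that produces the NVMe/TCP data digest (DDGST) for a write is corrupted, the receiving end's recomputed checksum no longer matches, tcp.c reports "Data digest error", and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which --nvme-error-stat counts and --bdev-retry-count -1 quietly retries. The digest itself is plain CRC32C (Castagnoli), the same checksum iSCSI uses. A bitwise bash reference of that checksum, for illustration only (the crc32c() function below is written for this note; SPDK's accel layer uses table-driven or hardware crc32c):

  # Reference CRC32C: init 0xFFFFFFFF, reflected polynomial 0x82F63B78, final XOR 0xFFFFFFFF
  crc32c() {
      local -i crc=0xFFFFFFFF byte i j
      local s=$1
      for (( i = 0; i < ${#s}; i++ )); do
          printf -v byte '%d' "'${s:i:1}"       # numeric value of the next byte
          (( crc ^= byte ))
          for (( j = 0; j < 8; j++ )); do       # one bit at a time, LSB first
              if (( crc & 1 )); then
                  (( crc = (crc >> 1) ^ 0x82F63B78 ))
              else
                  (( crc >>= 1 ))
              fi
          done
      done
      printf '%08x\n' $(( crc ^ 0xFFFFFFFF ))
  }
  crc32c 123456789   # prints e3069283, the standard CRC32C check value

Flipping a single bit in the payload, or in the computed digest as the injector does, changes the 32-bit result, so the digest comparison fails and the write completes with the transient status instead of success.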
00:58:25.272 [2024-06-11 03:55:06.429514] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e95a0 00:58:25.272 [2024-06-11 03:55:06.430269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.430287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.437923] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0ff8 00:58:25.272 [2024-06-11 03:55:06.438662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.438680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.447292] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fdeb0 00:58:25.272 [2024-06-11 03:55:06.448141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.448159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.456593] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e7c50 00:58:25.272 [2024-06-11 03:55:06.457570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.457588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.464857] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e5a90 00:58:25.272 [2024-06-11 03:55:06.465583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.465603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.473610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fda78 00:58:25.272 [2024-06-11 03:55:06.474333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.474352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.482673] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e73e0 00:58:25.272 [2024-06-11 03:55:06.483418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.483436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.491781] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fc998 00:58:25.272 [2024-06-11 03:55:06.492507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.492525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.500724] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fb8b8 00:58:25.272 [2024-06-11 03:55:06.501446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.501465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.509643] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190feb58 00:58:25.272 [2024-06-11 03:55:06.510432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.510450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.518796] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190df988 00:58:25.272 [2024-06-11 03:55:06.519512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.519529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.527694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190de8a8 00:58:25.272 [2024-06-11 03:55:06.528413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.528431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.536626] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e84c0 00:58:25.272 [2024-06-11 03:55:06.537358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.537376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.545556] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e95a0 00:58:25.272 [2024-06-11 03:55:06.546189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.546207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.554471] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ea680 00:58:25.272 [2024-06-11 03:55:06.555102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.555121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.563346] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f92c0 00:58:25.272 [2024-06-11 03:55:06.563967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.563985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.572270] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e5220 00:58:25.272 [2024-06-11 03:55:06.572893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.572911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.581144] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f1430 00:58:25.272 [2024-06-11 03:55:06.581768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.581786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.590044] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ec840 00:58:25.272 [2024-06-11 03:55:06.590667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.272 [2024-06-11 03:55:06.590685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.272 [2024-06-11 03:55:06.598952] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eb760 00:58:25.273 [2024-06-11 03:55:06.599580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.599598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.607828] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4de8 00:58:25.273 [2024-06-11 03:55:06.608459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.608477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.616746] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fe2e8 00:58:25.273 [2024-06-11 03:55:06.617387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.617405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.626973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e6fa8 00:58:25.273 [2024-06-11 03:55:06.628089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.628110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.636307] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0ff8 00:58:25.273 [2024-06-11 03:55:06.637619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.637638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.645663] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eff18 00:58:25.273 [2024-06-11 03:55:06.647067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.647085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.655166] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ebfd0 00:58:25.273 [2024-06-11 03:55:06.656698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.656717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.661474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4140 00:58:25.273 [2024-06-11 03:55:06.662179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.662197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:25.273 [2024-06-11 03:55:06.669942] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e95a0 00:58:25.273 [2024-06-11 03:55:06.670647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.273 [2024-06-11 03:55:06.670665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:25.532 [2024-06-11 03:55:06.680006] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190df118 00:58:25.532 [2024-06-11 03:55:06.680788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.532 [2024-06-11 03:55:06.680807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.532 [2024-06-11 03:55:06.689393] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ebb98 00:58:25.532 [2024-06-11 03:55:06.690347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.532 [2024-06-11 03:55:06.690366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:25.532 [2024-06-11 03:55:06.697826] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f4b08 00:58:25.532 [2024-06-11 03:55:06.698762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.532 [2024-06-11 03:55:06.698780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:25.532 [2024-06-11 03:55:06.706853] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eea00 00:58:25.532 [2024-06-11 03:55:06.707722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.532 [2024-06-11 03:55:06.707740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.716028] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190edd58 00:58:25.533 [2024-06-11 03:55:06.716966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.716984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.725080] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fc998 00:58:25.533 [2024-06-11 03:55:06.725939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.725956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.733971] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fc128 00:58:25.533 [2024-06-11 03:55:06.734848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 
03:55:06.734866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.742918] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4de8 00:58:25.533 [2024-06-11 03:55:06.743786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.743804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.751851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fe2e8 00:58:25.533 [2024-06-11 03:55:06.752715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.752733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.760757] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e6fa8 00:58:25.533 [2024-06-11 03:55:06.761656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.761674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.769897] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e7818 00:58:25.533 [2024-06-11 03:55:06.770785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.770804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.778811] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f2510 00:58:25.533 [2024-06-11 03:55:06.779677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.779695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.787756] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f35f0 00:58:25.533 [2024-06-11 03:55:06.788623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.788642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.796658] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0ff8 00:58:25.533 [2024-06-11 03:55:06.797524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:58:25.533 [2024-06-11 03:55:06.797542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.805564] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e27f0 00:58:25.533 [2024-06-11 03:55:06.806427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.806446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.814766] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e38d0 00:58:25.533 [2024-06-11 03:55:06.815514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.815533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.823856] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e5ec8 00:58:25.533 [2024-06-11 03:55:06.824845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.824863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.832792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e9168 00:58:25.533 [2024-06-11 03:55:06.833777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.833795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.841709] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e8088 00:58:25.533 [2024-06-11 03:55:06.842691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.842709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.850594] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fac10 00:58:25.533 [2024-06-11 03:55:06.851575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.851593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.859534] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f9f68 00:58:25.533 [2024-06-11 03:55:06.860519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1008 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.860540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.867627] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ea248 00:58:25.533 [2024-06-11 03:55:06.868994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.869019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.876031] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f96f8 00:58:25.533 [2024-06-11 03:55:06.876686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.876704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.884414] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fb048 00:58:25.533 [2024-06-11 03:55:06.885127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.885145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.893434] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ddc00 00:58:25.533 [2024-06-11 03:55:06.894084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.894102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.903899] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fa7d8 00:58:25.533 [2024-06-11 03:55:06.904977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.904995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.912347] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e95a0 00:58:25.533 [2024-06-11 03:55:06.913415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.913433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.921631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f1ca0 00:58:25.533 [2024-06-11 03:55:06.922732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:22882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.922750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:58:25.533 [2024-06-11 03:55:06.930664] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0788 00:58:25.533 [2024-06-11 03:55:06.931879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.533 [2024-06-11 03:55:06.931897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:06.940278] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eaab8 00:58:25.793 [2024-06-11 03:55:06.941499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:06.941518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:06.948106] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ebfd0 00:58:25.793 [2024-06-11 03:55:06.949480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:06.949499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:06.956502] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f2d80 00:58:25.793 [2024-06-11 03:55:06.957143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:06.957161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:06.965389] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0788 00:58:25.793 [2024-06-11 03:55:06.966022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:06.966040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:06.974267] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190de470 00:58:25.793 [2024-06-11 03:55:06.974907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:06.974924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:06.983480] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f4298 00:58:25.793 [2024-06-11 03:55:06.984327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:27 nsid:1 lba:22043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:06.984344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:06.991897] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190dfdc0 00:58:25.793 [2024-06-11 03:55:06.992711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:06.992729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.001207] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e6738 00:58:25.793 [2024-06-11 03:55:07.002153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.002172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.010270] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e1b48 00:58:25.793 [2024-06-11 03:55:07.011139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.011157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.019119] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fcdd0 00:58:25.793 [2024-06-11 03:55:07.019999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.020023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.028973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e99d8 00:58:25.793 [2024-06-11 03:55:07.029986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.030005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.038348] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ef6a8 00:58:25.793 [2024-06-11 03:55:07.039449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.039467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.045851] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e1f80 00:58:25.793 [2024-06-11 03:55:07.046361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.046380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.054932] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e6b70 00:58:25.793 [2024-06-11 03:55:07.055670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.055688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.063834] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e8d30 00:58:25.793 [2024-06-11 03:55:07.064574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.064591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.072722] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ef6a8 00:58:25.793 [2024-06-11 03:55:07.073457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.073475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.082794] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f9f68 00:58:25.793 [2024-06-11 03:55:07.084029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.084047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.092093] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ecc78 00:58:25.793 [2024-06-11 03:55:07.093469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.093491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.099973] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e7c50 00:58:25.793 [2024-06-11 03:55:07.100763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.100782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.108365] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f46d0 00:58:25.793 [2024-06-11 
03:55:07.109738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.109756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:58:25.793 [2024-06-11 03:55:07.116140] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f9f68 00:58:25.793 [2024-06-11 03:55:07.116756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.793 [2024-06-11 03:55:07.116774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.125483] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190feb58 00:58:25.794 [2024-06-11 03:55:07.126212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.126232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.134719] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e5220 00:58:25.794 [2024-06-11 03:55:07.135555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.135573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.144041] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f1868 00:58:25.794 [2024-06-11 03:55:07.145079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.145097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.153357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f7da8 00:58:25.794 [2024-06-11 03:55:07.154509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.154527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.162645] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190feb58 00:58:25.794 [2024-06-11 03:55:07.163929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.163947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.171961] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f6020 
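
Every repetition in this run follows the same record pattern: tcp.c:2062 (data_crc32_calc_done) reports a CRC-32C data digest mismatch on a received PDU, nvme_qpair.c prints the WRITE that carried it, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status (note dnr:0, so Do Not Retry is clear). To tally such completions from a saved copy of this console output, a grep is enough; the file name bperf.log below is only a placeholder for wherever the log was captured:

    # count retryable digest-error completions in a captured log (bperf.log is hypothetical)
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
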
00:58:25.794 [2024-06-11 03:55:07.173366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.173391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.181293] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e3498 00:58:25.794 [2024-06-11 03:55:07.182826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.182844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:58:25.794 [2024-06-11 03:55:07.187577] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e1b48 00:58:25.794 [2024-06-11 03:55:07.188292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:25.794 [2024-06-11 03:55:07.188310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:58:26.053 [2024-06-11 03:55:07.196772] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4de8 00:58:26.053 [2024-06-11 03:55:07.197423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.197442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.205827] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f6890 00:58:26.054 [2024-06-11 03:55:07.206463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.206481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.214711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eb760 00:58:26.054 [2024-06-11 03:55:07.215344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.215363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.223635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ee5c8 00:58:26.054 [2024-06-11 03:55:07.224266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.224284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.232527] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) 
with pdu=0x2000190eff18 00:58:26.054 [2024-06-11 03:55:07.233159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.233177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.241412] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f8e88 00:58:26.054 [2024-06-11 03:55:07.242038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.242057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.249719] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ed0b0 00:58:26.054 [2024-06-11 03:55:07.250437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.250456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.259614] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e99d8 00:58:26.054 [2024-06-11 03:55:07.260366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.260385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.268821] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fb8b8 00:58:26.054 [2024-06-11 03:55:07.269803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.269821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.277495] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f1868 00:58:26.054 [2024-06-11 03:55:07.278473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.278492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.286822] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e5220 00:58:26.054 [2024-06-11 03:55:07.287896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.287914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.296199] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc17d0) with pdu=0x2000190ef270 00:58:26.054 [2024-06-11 03:55:07.297381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.297399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.304054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0350 00:58:26.054 [2024-06-11 03:55:07.304547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.304566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.312926] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190feb58 00:58:26.054 [2024-06-11 03:55:07.313424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.313443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.321991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e27f0 00:58:26.054 [2024-06-11 03:55:07.322821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.322839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.331034] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eea00 00:58:26.054 [2024-06-11 03:55:07.331771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.331789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.340165] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e8d30 00:58:26.054 [2024-06-11 03:55:07.340767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.340786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.349476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190eb328 00:58:26.054 [2024-06-11 03:55:07.350202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.350220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.357730] tcp.c:2062:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fb480 00:58:26.054 [2024-06-11 03:55:07.359153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.359172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.366117] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f7100 00:58:26.054 [2024-06-11 03:55:07.366730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.366747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.375284] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ee5c8 00:58:26.054 [2024-06-11 03:55:07.376023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.376041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.384659] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e95a0 00:58:26.054 [2024-06-11 03:55:07.385598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.385616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.393085] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e8d30 00:58:26.054 [2024-06-11 03:55:07.394020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.394037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.402372] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e9168 00:58:26.054 [2024-06-11 03:55:07.403427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.403448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.412155] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0ff8 00:58:26.054 [2024-06-11 03:55:07.413241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.413258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.419666] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f92c0 00:58:26.054 [2024-06-11 03:55:07.420160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.420179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:26.054 [2024-06-11 03:55:07.429972] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190feb58 00:58:26.054 [2024-06-11 03:55:07.431173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.054 [2024-06-11 03:55:07.431192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:58:26.055 [2024-06-11 03:55:07.437842] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e2c28 00:58:26.055 [2024-06-11 03:55:07.438443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.055 [2024-06-11 03:55:07.438462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:26.055 [2024-06-11 03:55:07.447036] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fe2e8 00:58:26.055 [2024-06-11 03:55:07.447871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.055 [2024-06-11 03:55:07.447888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:58:26.055 [2024-06-11 03:55:07.456341] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e5a90 00:58:26.055 [2024-06-11 03:55:07.457437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.055 [2024-06-11 03:55:07.457456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.465610] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f4f40 00:58:26.314 [2024-06-11 03:55:07.466577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.466595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.474500] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fb048 00:58:26.314 [2024-06-11 03:55:07.475472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.475490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:26.314 
[2024-06-11 03:55:07.483698] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ea680 00:58:26.314 [2024-06-11 03:55:07.484778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.484797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.491214] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e99d8 00:58:26.314 [2024-06-11 03:55:07.491691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.491709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.499237] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fc560 00:58:26.314 [2024-06-11 03:55:07.499914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.499931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.508562] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fe720 00:58:26.314 [2024-06-11 03:55:07.509359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.509377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.518475] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4140 00:58:26.314 [2024-06-11 03:55:07.519314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.519333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.527594] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e3d08 00:58:26.314 [2024-06-11 03:55:07.528463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.528481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.536616] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ebfd0 00:58:26.314 [2024-06-11 03:55:07.537458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.537476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:58:26.314 [2024-06-11 03:55:07.545516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f0788 00:58:26.314 [2024-06-11 03:55:07.546353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.546371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.553823] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190df550 00:58:26.314 [2024-06-11 03:55:07.554742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.554760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.563635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e01f8 00:58:26.314 [2024-06-11 03:55:07.564589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.564607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.572916] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e6fa8 00:58:26.314 [2024-06-11 03:55:07.574070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.574088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.580245] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f1430 00:58:26.314 [2024-06-11 03:55:07.580835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.580853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.589211] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e4de8 00:58:26.314 [2024-06-11 03:55:07.589795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.589812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.599247] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e0630 00:58:26.314 [2024-06-11 03:55:07.600393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.600411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 
cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.607116] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f4f40 00:58:26.314 [2024-06-11 03:55:07.607578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.314 [2024-06-11 03:55:07.607597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:26.314 [2024-06-11 03:55:07.615999] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e7c50 00:58:26.315 [2024-06-11 03:55:07.616544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.616562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.624881] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fdeb0 00:58:26.315 [2024-06-11 03:55:07.625413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.625432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.634342] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ea248 00:58:26.315 [2024-06-11 03:55:07.634930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.634953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.643476] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e8d30 00:58:26.315 [2024-06-11 03:55:07.644304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.644323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.652427] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f2d80 00:58:26.315 [2024-06-11 03:55:07.653293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.653311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.661389] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f2948 00:58:26.315 [2024-06-11 03:55:07.662221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.662238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.670273] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e0ea0 00:58:26.315 [2024-06-11 03:55:07.671099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.671116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.679194] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ef270 00:58:26.315 [2024-06-11 03:55:07.680017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.680035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.688351] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190df550 00:58:26.315 [2024-06-11 03:55:07.689059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.689078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.696664] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fe720 00:58:26.315 [2024-06-11 03:55:07.698074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.698093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.704450] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f6458 00:58:26.315 [2024-06-11 03:55:07.705122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.705140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:58:26.315 [2024-06-11 03:55:07.713775] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ed0b0 00:58:26.315 [2024-06-11 03:55:07.714597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.315 [2024-06-11 03:55:07.714615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:58:26.574 [2024-06-11 03:55:07.723336] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190feb58 00:58:26.574 [2024-06-11 03:55:07.724263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.574 [2024-06-11 03:55:07.724281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:58:26.574 [2024-06-11 03:55:07.732694] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f7538 00:58:26.574 [2024-06-11 03:55:07.733725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.574 [2024-06-11 03:55:07.733744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:58:26.574 [2024-06-11 03:55:07.741989] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f7da8 00:58:26.574 [2024-06-11 03:55:07.743167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.574 [2024-06-11 03:55:07.743186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:58:26.574 [2024-06-11 03:55:07.751306] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ed0b0 00:58:26.575 [2024-06-11 03:55:07.752574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.752592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.760661] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190efae0 00:58:26.575 [2024-06-11 03:55:07.762051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.762068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.769963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e6300 00:58:26.575 [2024-06-11 03:55:07.771463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.771482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.776474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f5378 00:58:26.575 [2024-06-11 03:55:07.777070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.777087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.785819] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f31b8 00:58:26.575 [2024-06-11 03:55:07.786537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.786555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.795164] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e88f8 00:58:26.575 [2024-06-11 03:55:07.795998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.796021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.804503] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190f7100 00:58:26.575 [2024-06-11 03:55:07.805461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.805479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.813797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ebb98 00:58:26.575 [2024-06-11 03:55:07.814866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.814884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.821317] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ec408 00:58:26.575 [2024-06-11 03:55:07.821789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.821807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.830635] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190feb58 00:58:26.575 [2024-06-11 03:55:07.831227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.831246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.839924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190ec840 00:58:26.575 [2024-06-11 03:55:07.840638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 03:55:07.840657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:58:26.575 [2024-06-11 03:55:07.848186] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190fe720 00:58:26.575 [2024-06-11 03:55:07.849597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:58:26.575 [2024-06-11 
00:58:26.575 [2024-06-11 03:55:07.849616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:58:26.575 [2024-06-11 03:55:07.855991] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc17d0) with pdu=0x2000190e5ec8
00:58:26.575 [2024-06-11 03:55:07.856672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:58:26.575 [2024-06-11 03:55:07.856690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
[... the same digest-error / WRITE / TRANSIENT TRANSPORT ERROR triplet repeats on tqpair=(0x1bc17d0) through 03:55:08.000187, with only the pdu, cid, lba and sqhd fields varying ...]
00:58:26.834
00:58:26.834                                     Latency(us)
00:58:26.834 Device Information                  : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:58:26.834 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:58:26.834 nvme0n1                             :       2.00   28558.64     111.56      0.00      0.00    4476.22    1755.43   11109.91
00:58:26.834 ===================================================================================================================
00:58:26.834 Total                               :              28558.64     111.56      0.00      0.00    4476.22    1755.43   11109.91
00:58:26.834 0
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:58:26.834 | .driver_specific
00:58:26.834 | .nvme_error
00:58:26.834 | .status_code
00:58:26.834 | .command_transient_transport_error'
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 ))
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2401044
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2401044 ']'
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2401044
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:58:26.834 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2401044
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2401044'
killing process with pid 2401044
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2401044
00:58:27.093 Received shutdown signal, test time was about 2.000000 seconds
00:58:27.093
00:58:27.093                                     Latency(us)
00:58:27.093 Device Information                  : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:58:27.093 ===================================================================================================================
00:58:27.093 Total                               :       0.00       0.00       0.00      0.00      0.00       0.00       0.00
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2401044
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2401632
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2401632 /var/tmp/bperf.sock
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # '[' -z 2401632 ']'
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local max_retries=100
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # xtrace_disable
00:58:27.093 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:27.093 [2024-06-11 03:55:08.466560] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
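[Editorial note: in script form, the launch step traced above amounts to the following. This is a minimal sketch using only the paths and flags visible in this log; the SPDK variable is shorthand introduced here for readability, not something the harness defines.]

  # Launch bdevperf idle (-z: wait for RPC before running the workload)
  # on core mask 0x2, with its JSON-RPC server on the UNIX socket the
  # digest test uses; the harness then polls the socket (waitforlisten).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!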
00:58:27.093 [2024-06-11 03:55:08.466605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401632 ]
00:58:27.093 I/O size of 131072 is greater than zero copy threshold (65536).
00:58:27.093 Zero copy mechanism will not be used.
00:58:27.093 EAL: No free 2048 kB hugepages reported on node 1
00:58:27.352 [2024-06-11 03:55:08.525686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:58:27.352 [2024-06-11 03:55:08.564273] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1
00:58:27.352 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:58:27.352 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@863 -- # return 0
00:58:27.352 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:58:27.352 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:58:27.611 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:58:27.611 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:27.611 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:27.611 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:27.611 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:58:27.611 03:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:58:27.869 nvme0n1
00:58:27.869 03:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:58:27.869 03:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:58:27.869 03:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:27.869 03:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:58:27.869 03:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:58:27.869 03:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:58:27.869 I/O size of 131072 is greater than zero copy threshold (65536).
00:58:27.869 Zero copy mechanism will not be used.
00:58:27.869 Running I/O for 2 seconds...
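[Editorial note: spelled out, the setup sequence just traced is, in sketch form, the following. Same SPDK shorthand as above; note that in this trace the accel_error_inject_error calls go through rpc_cmd, which appears to target the nvmf application's default RPC socket rather than the bperf socket, so they are shown here without -s.]

  # Count NVMe error completions per status code and retry I/O forever
  # at the bdev layer, so injected digest failures accumulate as
  # transient-error counters instead of failing the workload outright.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale crc32c injection, then attach the target with TCP
  # data digest enabled (--ddgst) so every data PDU carries a crc32c.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt every 32nd crc32c result, then start the timed workload.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With the digest corrupted on roughly one in 32 operations, each affected WRITE is rejected with the TRANSIENT TRANSPORT ERROR (00/22) status seen in the records that follow, and is retried rather than failed.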
00:58:27.869 [2024-06-11 03:55:09.243297] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90
00:58:27.869 [2024-06-11 03:55:09.243748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:27.869 [2024-06-11 03:55:09.243775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:58:27.869 [2024-06-11 03:55:09.251493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90
00:58:27.869 [2024-06-11 03:55:09.251893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:58:27.869 [2024-06-11 03:55:09.251916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same digest-error / WRITE / TRANSIENT TRANSPORT ERROR triplet repeats on tqpair=(0x1bc1cb0), pdu=0x2000190fef90, qid:1 cid:15, len:32, at roughly 5-9 ms intervals from 03:55:09.258644 through 03:55:09.949766, with only the timestamps, lba and sqhd fields varying; the elapsed-time prefix advances from 00:58:27.869 to 00:58:28.653 ...]
00:58:28.653 [2024-06-11 03:55:09.957222] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90
00:58:28.653 [2024-06-11 03:55:09.957665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:09.957683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:09.964771] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:09.965214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:09.965233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:09.972554] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:09.973026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:09.973045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:09.980047] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:09.980487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:09.980505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:09.987855] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:09.988302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:09.988320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:09.995371] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:09.995810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:09.995828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.001924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.002299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.002319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.007566] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.007948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.007967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.013506] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.013863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.013882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.018740] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.019100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.019120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.023524] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.023876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.023896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.030351] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.031186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.031210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.035682] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.036045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.036064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.040401] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.040749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.040772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.045423] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.045770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.045789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.050200] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.050549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.050567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.653 [2024-06-11 03:55:10.054863] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.653 [2024-06-11 03:55:10.055206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.653 [2024-06-11 03:55:10.055224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.913 [2024-06-11 03:55:10.059532] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.059886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.059904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.064149] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.064498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.064515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.068680] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.068999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.069023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.073578] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.073903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.073921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.078503] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 
[2024-06-11 03:55:10.078830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.078849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.083538] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.083859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.083877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.088836] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.089164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.089182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.095091] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.095501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.095518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.101747] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.102100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.102118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.108625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.109054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.109071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.115516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.115847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.115865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.121179] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.121501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.121518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.126966] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.127280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.127297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.131868] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.132202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.132220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.136573] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.136897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.136914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.141286] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.141599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.141616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.145820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.146144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.146162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.150365] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.150693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.150711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.914 [2024-06-11 03:55:10.154865] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.914 [2024-06-11 03:55:10.155188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.914 [2024-06-11 03:55:10.155206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.159383] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.159701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.159719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.163863] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.164195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.164213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.168425] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.168749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.168767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.172965] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.173285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.173306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.177489] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.177795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.177813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.182114] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.182448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.182467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:58:28.915 [2024-06-11 03:55:10.186702] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.187033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.187052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.191304] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.191624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.195879] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.196210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.196229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.200846] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.201180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.201199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.206439] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.206763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.206780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.212212] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.212540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.212558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.217484] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.217819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.217837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.223039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.223359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.223377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.229001] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.229333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.229351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.235054] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.235384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.235402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.240931] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.241257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.241274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.247093] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.247413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.247430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.252486] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.252837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.915 [2024-06-11 03:55:10.252855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.915 [2024-06-11 03:55:10.257874] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.915 [2024-06-11 03:55:10.258210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.258228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.262739] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.263068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.263086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.267410] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.267736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.267754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.272085] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.272429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.272447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.276653] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.276984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.277002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.281255] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.281584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.281602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.285788] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.286117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.286136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.290356] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.290681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.290698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.294924] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.295244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.295261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.299390] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.299701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.299719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.303882] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.304211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.304233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.308373] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.308693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.308711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:28.916 [2024-06-11 03:55:10.312864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:28.916 [2024-06-11 03:55:10.313189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:28.916 [2024-06-11 03:55:10.313207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.317392] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.317717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.317736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.321929] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.322264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 
[2024-06-11 03:55:10.322283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.326463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.326787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.326805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.330987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.331316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.331334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.335495] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.335814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.335832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.339963] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.340286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.340305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.344474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.344794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.344812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.348925] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.349246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.349264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.353419] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.353739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.353758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.357933] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.358265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.358283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.362628] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.362942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.362960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.367005] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.367333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.367350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.371479] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.371799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.371817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.379579] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.380038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.380056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.386762] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.387096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.387118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.392982] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.393330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.176 [2024-06-11 03:55:10.393348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.176 [2024-06-11 03:55:10.398173] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.176 [2024-06-11 03:55:10.398485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.398502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.402880] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.403221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.403239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.407671] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.407993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.408016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.412647] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.412970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.412988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.417823] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.418153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.418171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.423049] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.423377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.423395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.429641] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.429986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.430004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.435764] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.436080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.436099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.441491] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.441818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.441836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.449030] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.449477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.449495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.456649] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.456981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.457000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.462825] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.463151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.463169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.468175] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.468503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.468521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.473065] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 
[2024-06-11 03:55:10.473375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.473393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.477698] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.478025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.478043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.483287] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.483602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.483620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.490120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.490476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.490494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.498864] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.499336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.499355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.507354] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.507733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.507751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.514516] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.514885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.514902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.520947] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.521283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.521300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.525908] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.526236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.526254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.531289] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.531603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.531621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.539207] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.539644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.539662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.546423] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.546744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.546766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.553627] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.553976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.553994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.559905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.560232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.560249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.177 [2024-06-11 03:55:10.566668] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.177 [2024-06-11 03:55:10.567090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.177 [2024-06-11 03:55:10.567108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.178 [2024-06-11 03:55:10.575295] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.178 [2024-06-11 03:55:10.575670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.178 [2024-06-11 03:55:10.575689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.436 [2024-06-11 03:55:10.582941] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.436 [2024-06-11 03:55:10.583400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.436 [2024-06-11 03:55:10.583418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.436 [2024-06-11 03:55:10.591372] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.436 [2024-06-11 03:55:10.591723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.591741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.598905] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.599293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.599311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.606456] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.606889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.606907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.614657] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.615089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.615106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:58:29.437 [2024-06-11 03:55:10.622345] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.622722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.622740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.629545] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.629963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.629980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.636959] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.637475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.637493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.644820] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.645215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.645232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.651969] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.652295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.652313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.660353] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.660852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.660869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.669884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.670299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.670317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.677887] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.678223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.678241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.684250] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.684679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.684696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.691734] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.692156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.692174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.699609] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.700024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.700041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.706914] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.707424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.707442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.715059] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.715421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.715439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.721474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.721848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.721866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.729466] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.729855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.729872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.736415] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.736764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.736782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.741797] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.742167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.742189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.747758] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.748126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.748143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.754375] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.754746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.754763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.761040] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.761380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.761398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.767638] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.768040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.768058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.775344] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.775733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.775751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.437 [2024-06-11 03:55:10.783396] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.437 [2024-06-11 03:55:10.783833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.437 [2024-06-11 03:55:10.783850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.438 [2024-06-11 03:55:10.792486] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.438 [2024-06-11 03:55:10.792911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.438 [2024-06-11 03:55:10.792930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.438 [2024-06-11 03:55:10.800504] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.438 [2024-06-11 03:55:10.800947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.438 [2024-06-11 03:55:10.800965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.438 [2024-06-11 03:55:10.807952] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.438 [2024-06-11 03:55:10.808372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.438 [2024-06-11 03:55:10.808389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.438 [2024-06-11 03:55:10.815531] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.438 [2024-06-11 03:55:10.815973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.438 [2024-06-11 03:55:10.815990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.438 [2024-06-11 03:55:10.823501] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.438 [2024-06-11 03:55:10.823896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.438 
[2024-06-11 03:55:10.823913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.438 [2024-06-11 03:55:10.831711] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.438 [2024-06-11 03:55:10.832167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.438 [2024-06-11 03:55:10.832186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.438 [2024-06-11 03:55:10.839296] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.438 [2024-06-11 03:55:10.839735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.438 [2024-06-11 03:55:10.839752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.847394] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.847830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.847848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.855791] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.856234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.856252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.864019] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.864412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.864430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.872327] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.872744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.872766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.880625] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.881057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.881075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.889376] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.889777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.889795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.896022] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.896350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.896368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.902264] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.902577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.902594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.907876] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.908209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.908227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.913463] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.913818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.913836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.918567] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.698 [2024-06-11 03:55:10.918890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.698 [2024-06-11 03:55:10.918908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.698 [2024-06-11 03:55:10.923269] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.923586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.923604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.927894] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.928230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.928248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.932493] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.932818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.932836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.937719] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.938065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.938084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.944318] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.944717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.944734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.950370] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.950726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.950744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.956221] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.956539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.956557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.961217] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.961536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.961553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.966889] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.967225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.967242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.973307] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.973630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.973648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.978547] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.978866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.978883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.983614] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.983938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.983956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.988693] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.989026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.989044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.993436] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:10.993757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.993774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:10.998305] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 
[2024-06-11 03:55:10.998630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:10.998647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.003465] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.003793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.003810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.009317] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.009627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.009645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.015108] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.015450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.015468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.020402] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.020724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.020745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.025491] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.025805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.025823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.030357] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.030687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.030704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.034987] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.035341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.035360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.039677] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.039998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.040022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.044288] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.044617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.044635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.048908] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.049233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.049251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.053450] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.053776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.053794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.058029] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.058343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.699 [2024-06-11 03:55:11.058361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.699 [2024-06-11 03:55:11.062515] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.699 [2024-06-11 03:55:11.062843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.062861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.700 [2024-06-11 03:55:11.067036] 
tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.067358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.067375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.700 [2024-06-11 03:55:11.071619] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.071941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.071959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.700 [2024-06-11 03:55:11.076104] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.076422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.076440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.700 [2024-06-11 03:55:11.080641] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.080950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.080967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.700 [2024-06-11 03:55:11.085129] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.085460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.085478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.700 [2024-06-11 03:55:11.089827] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.090145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.090163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.700 [2024-06-11 03:55:11.094337] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.094677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.094695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:58:29.700 [2024-06-11 03:55:11.098884] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.700 [2024-06-11 03:55:11.099212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.700 [2024-06-11 03:55:11.099230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.103445] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.103768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.103786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.108022] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.108336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.108354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.112559] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.112876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.112894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.117131] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.117444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.117462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.121550] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.121859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.121876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.126039] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.126363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.126381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.130474] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.130793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.130810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.134873] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.135205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.135223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.139324] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.139648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.139670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.143792] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.144116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.144134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.148257] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.148575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.148592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.152681] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.152987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.153004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.157120] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.157464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.157482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:58:29.960 [2024-06-11 03:55:11.161631] tcp.c:2062:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bc1cb0) with pdu=0x2000190fef90 00:58:29.960 [2024-06-11 03:55:11.161956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:29.960 [2024-06-11 03:55:11.161974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-entry pattern -- data_crc32_calc_done data digest error, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR completion -- repeats for fifteen more WRITEs (lba 17056, 1504, 1888, 5184, 5664, 16480, 2944, 3648, 15072, 1920, 15232, 6464, 18272, 7584, 21664) between 03:55:11.166 and 03:55:11.229; identical entries elided ...]
00:58:29.961
00:58:29.961 Latency(us)
00:58:29.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:29.961 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:58:29.961 nvme0n1 : 2.00 5180.95 647.62 0.00 0.00 3084.23 1708.62 13793.77
00:58:29.961 ===================================================================================================================
00:58:29.961 Total : 5180.95 647.62 0.00 0.00 3084.23 1708.62 13793.77
00:58:29.961 0
00:58:29.961 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:58:29.961 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:58:29.961 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error' 00:58:29.961 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 334 > 0 )) 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2401632 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2401632 ']' 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2401632 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2401632 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2401632' 00:58:30.220 killing process with pid 2401632
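The 334 checked just above is read straight back out of bdevperf over its RPC socket: bdev_get_iostat reports per-controller NVMe error counters under driver_specific.nvme_error, and the test extracts a single field with jq. A minimal standalone sketch of the same query (socket path, bdev name and jq path are taken from the trace above; run it from the SPDK checkout so scripts/rpc.py resolves):

  #!/usr/bin/env bash
  # Count command-level transient transport errors recorded for nvme0n1.
  sock=/var/tmp/bperf.sock   # bperf_rpc's socket in this run
  bdev=nvme0n1
  errcount=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test passes only if at least one error was provoked.
  (( errcount > 0 )) && echo "saw $errcount transient transport errors"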
03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2401632 00:58:30.220 Received shutdown signal, test time was about 2.000000 seconds
00:58:30.220
00:58:30.220 Latency(us)
00:58:30.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:30.220 ===================================================================================================================
00:58:30.220 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:58:30.220 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2401632 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2399977 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@949 -- # '[' -z 2399977 ']' 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # kill -0 2399977 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # uname 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2399977 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2399977' 00:58:30.544 killing process with pid 2399977 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # kill 2399977 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # wait 2399977
00:58:30.544
00:58:30.544 real 0m13.508s
00:58:30.544 user 0m25.610s
00:58:30.544 sys 0m4.341s
00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # xtrace_disable 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:58:30.544 ************************************
00:58:30.544 END TEST nvmf_digest_error
00:58:30.544 ************************************
00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:58:30.544 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:58:30.544 rmmod nvme_tcp 00:58:30.803 rmmod nvme_fabrics 00:58:30.803 rmmod nvme_keyring 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:58:30.803 03:55:11
nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2399977 ']' 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2399977 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@949 -- # '[' -z 2399977 ']' 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@953 -- # kill -0 2399977 00:58:30.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2399977) - No such process 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@976 -- # echo 'Process with pid 2399977 is not found' 00:58:30.803 Process with pid 2399977 is not found 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:58:30.803 03:55:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:32.706 03:55:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:58:32.706 00:58:32.706 real 0m36.395s 00:58:32.706 user 0m53.585s 00:58:32.706 sys 0m13.666s 00:58:32.706 03:55:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 00:58:32.706 03:55:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:58:32.706 ************************************ 00:58:32.706 END TEST nvmf_digest 00:58:32.706 ************************************ 00:58:32.706 03:55:14 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:58:32.706 03:55:14 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:58:32.706 03:55:14 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:58:32.706 03:55:14 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:58:32.706 03:55:14 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:58:32.706 03:55:14 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:58:32.706 03:55:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:32.706 ************************************ 00:58:32.706 START TEST nvmf_bdevperf 00:58:32.706 ************************************ 00:58:32.706 03:55:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:58:32.965 * Looking for test storage... 
00:58:32.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:32.965 03:55:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 re-prepend the same /opt/go, /opt/golangci and /opt/protoc directories to the PATH above; the two near-identical PATH assignments are elided ...]
00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo [the final PATH value, identical to the list above; elided] 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf --
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:58:32.966 03:55:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:58:39.533 Found 0000:86:00.0 (0x8086 - 0x159b) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:58:39.533 Found 0000:86:00.1 (0x8086 - 0x159b) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:58:39.533 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:58:39.534 Found net devices under 0000:86:00.0: cvl_0_0 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:58:39.534 Found net devices under 0000:86:00.1: cvl_0_1 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:58:39.534 03:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:58:39.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:58:39.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:58:39.534 00:58:39.534 --- 10.0.0.2 ping statistics --- 00:58:39.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:39.534 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:58:39.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:58:39.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:58:39.534 00:58:39.534 --- 10.0.0.1 ping statistics --- 00:58:39.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:58:39.534 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2405920 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2405920 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2405920 ']' 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:58:39.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 [2024-06-11 03:55:20.242581] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:39.534 [2024-06-11 03:55:20.242623] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:58:39.534 EAL: No free 2048 kB hugepages reported on node 1 00:58:39.534 [2024-06-11 03:55:20.306973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:58:39.534 [2024-06-11 03:55:20.347284] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
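Everything nvmf_tcp_init traced above reduces to stock iproute2 calls: the two cvl interfaces were discovered under /sys/bus/pci/devices/<bdf>/net of the E810 ports, one port is moved into a private network namespace to act as the target side, both ends get addresses on 10.0.0.0/24, and reachability is proven in both directions. A condensed standalone sketch of the same topology (interface and namespace names are this run's; substitute your own NIC ports elsewhere; needs root):

  #!/usr/bin/env bash
  # Rebuild the target/initiator test topology set up by nvmftestinit above.
  set -e
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean ports
  ip netns add cvl_0_0_ns_spdk                 # namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target-side port in
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator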
00:58:39.534 [2024-06-11 03:55:20.347328] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:39.534 [2024-06-11 03:55:20.347337] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:39.534 [2024-06-11 03:55:20.347343] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:39.534 [2024-06-11 03:55:20.347349] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:58:39.534 [2024-06-11 03:55:20.347458] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:58:39.534 [2024-06-11 03:55:20.347525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:58:39.534 [2024-06-11 03:55:20.347528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 [2024-06-11 03:55:20.484633] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 Malloc0 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:39.534 [2024-06-11 03:55:20.550200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:58:39.534 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:58:39.534 { 00:58:39.534 "params": { 00:58:39.534 "name": "Nvme$subsystem", 00:58:39.535 "trtype": "$TEST_TRANSPORT", 00:58:39.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:58:39.535 "adrfam": "ipv4", 00:58:39.535 "trsvcid": "$NVMF_PORT", 00:58:39.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:58:39.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:58:39.535 "hdgst": ${hdgst:-false}, 00:58:39.535 "ddgst": ${ddgst:-false} 00:58:39.535 }, 00:58:39.535 "method": "bdev_nvme_attach_controller" 00:58:39.535 } 00:58:39.535 EOF 00:58:39.535 )") 00:58:39.535 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:58:39.535 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:58:39.535 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:58:39.535 03:55:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:58:39.535 "params": { 00:58:39.535 "name": "Nvme1", 00:58:39.535 "trtype": "tcp", 00:58:39.535 "traddr": "10.0.0.2", 00:58:39.535 "adrfam": "ipv4", 00:58:39.535 "trsvcid": "4420", 00:58:39.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:58:39.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:58:39.535 "hdgst": false, 00:58:39.535 "ddgst": false 00:58:39.535 }, 00:58:39.535 "method": "bdev_nvme_attach_controller" 00:58:39.535 }' 00:58:39.535 [2024-06-11 03:55:20.597974] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:39.535 [2024-06-11 03:55:20.598027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2405948 ] 00:58:39.535 EAL: No free 2048 kB hugepages reported on node 1 00:58:39.535 [2024-06-11 03:55:20.658130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:39.535 [2024-06-11 03:55:20.698518] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:58:39.535 Running I/O for 1 seconds... 
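The target side of this test is only ever five RPCs, all visible in the rpc_cmd trace above: create the TCP transport, back a namespace with a 64 MiB malloc bdev, and expose it through cnode1 on 10.0.0.2:4420. A standalone sketch of the same provisioning, assuming a running nvmf_tgt answering on the default /var/tmp/spdk.sock (the test reaches its namespaced target through rpc_cmd instead):

  #!/usr/bin/env bash
  # Provision the cnode1 subsystem exactly as traced above.
  set -e
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK/scripts/rpc.py"
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then plays the host: the resolved JSON printed above hands it a single bdev_nvme_attach_controller call for Nvme1 against that listener, fed over an anonymous /dev/fd descriptor rather than a config file.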
00:58:40.911
00:58:40.911 Latency(us)
00:58:40.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:58:40.911 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:58:40.911 Verification LBA range: start 0x0 length 0x4000
00:58:40.911 Nvme1n1 : 1.01 11257.78 43.98 0.00 0.00 11327.67 1497.97 17850.76
00:58:40.911 ===================================================================================================================
00:58:40.911 Total : 11257.78 43.98 0.00 0.00 11327.67 1497.97 17850.76
00:58:40.911 03:55:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2406180 00:58:40.911 03:55:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:58:40.911 03:55:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF ... EOF )") [heredoc template identical to the first bdevperf run above; elided] 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:58:40.912 03:55:22 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ ... }' [resolved Nvme1 bdev_nvme_attach_controller JSON identical to the first run; elided] 00:58:40.912 [2024-06-11 03:55:22.117991] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:58:40.912 [2024-06-11 03:55:22.118057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2406180 ] 00:58:40.912 EAL: No free 2048 kB hugepages reported on node 1 00:58:40.912 [2024-06-11 03:55:22.177535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:58:40.912 [2024-06-11 03:55:22.214734] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:58:41.168 Running I/O for 15 seconds...
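The second, 15-second run exists so the script can pull the target out from under bdevperf mid-workload; the flood of ABORTED - SQ DELETION completions that follows is the scenario under test, not a malfunction. As the trace below shows, the trigger is nothing more elaborate than (sketch; $nvmfpid is the nvmf_tgt PID captured at startup, 2405920 in this run):

  # Hard-kill the running nvmf target, then give the initiator a few
  # seconds to notice the dead connection and fail its queued I/O.
  kill -9 "$nvmfpid"
  sleep 3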
00:58:43.698 03:55:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2405920 03:55:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:58:43.698 [2024-06-11 03:55:25.090174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.698 [2024-06-11 03:55:25.090215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... with the target hard-killed mid-run, every outstanding READ on the qpair is completed the same way; the two-entry command/completion pair repeats here for lba 110936 through 111480 (timestamps 03:55:25.090232 to 03:55:25.091270) and continues below; identical entries elided ...]
00:58:43.700 [2024-06-11
03:55:25.091278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.700 [2024-06-11 03:55:25.091664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.700 [2024-06-11 03:55:25.091679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091724] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.091990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.091997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:58:43.701 [2024-06-11 03:55:25.092121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13129c0 is same with the state(5) to be set 00:58:43.701 [2024-06-11 03:55:25.092137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:58:43.701 [2024-06-11 03:55:25.092142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:58:43.701 [2024-06-11 03:55:25.092148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111944 len:8 PRP1 0x0 PRP2 0x0 00:58:43.701 [2024-06-11 03:55:25.092156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:58:43.701 [2024-06-11 03:55:25.092197] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13129c0 was disconnected and freed. reset controller. 
00:58:43.701 [2024-06-11 03:55:25.094989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.701 [2024-06-11 03:55:25.095045] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.701 [2024-06-11 03:55:25.095691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.701 [2024-06-11 03:55:25.095734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.701 [2024-06-11 03:55:25.095757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.701 [2024-06-11 03:55:25.096363] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.701 [2024-06-11 03:55:25.096536] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.701 [2024-06-11 03:55:25.096545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.701 [2024-06-11 03:55:25.096552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.701 [2024-06-11 03:55:25.099299] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.960 [2024-06-11 03:55:25.108209] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.960 [2024-06-11 03:55:25.108659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.960 [2024-06-11 03:55:25.108676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.960 [2024-06-11 03:55:25.108684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.960 [2024-06-11 03:55:25.108856] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.960 [2024-06-11 03:55:25.109033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.960 [2024-06-11 03:55:25.109041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.960 [2024-06-11 03:55:25.109048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.960 [2024-06-11 03:55:25.111794] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.960 [2024-06-11 03:55:25.121222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.960 [2024-06-11 03:55:25.121659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.960 [2024-06-11 03:55:25.121675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.960 [2024-06-11 03:55:25.121686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.960 [2024-06-11 03:55:25.121853] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.960 [2024-06-11 03:55:25.122031] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.960 [2024-06-11 03:55:25.122040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.960 [2024-06-11 03:55:25.122047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.960 [2024-06-11 03:55:25.124641] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.961 [2024-06-11 03:55:25.133952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.134390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.134433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.134455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.134877] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.135056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.135064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.135070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.137672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.961 [2024-06-11 03:55:25.146894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.147316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.147371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.147394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.147907] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.148071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.148079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.148085] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.150702] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.961 [2024-06-11 03:55:25.159792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.160235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.160279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.160302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.160874] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.161056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.161068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.161075] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.163718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.961 [2024-06-11 03:55:25.172628] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.173082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.173098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.173104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.173272] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.173440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.173448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.173454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.176101] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.961 [2024-06-11 03:55:25.185482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.185848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.185863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.185872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.186035] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.186195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.186203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.186209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.188826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.961 [2024-06-11 03:55:25.198384] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.198833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.198849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.198856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.199033] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.199204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.199213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.199219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.201970] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.961 [2024-06-11 03:55:25.211154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.211589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.211606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.211613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.211779] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.211946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.211954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.211960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.214518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.961 [2024-06-11 03:55:25.224028] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.224466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.224509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.224531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.224968] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.225133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.225141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.961 [2024-06-11 03:55:25.225147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.961 [2024-06-11 03:55:25.227721] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.961 [2024-06-11 03:55:25.236892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.961 [2024-06-11 03:55:25.237266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.961 [2024-06-11 03:55:25.237282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.961 [2024-06-11 03:55:25.237289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.961 [2024-06-11 03:55:25.237455] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.961 [2024-06-11 03:55:25.237621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.961 [2024-06-11 03:55:25.237629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.237635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.240260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.962 [2024-06-11 03:55:25.249781] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.250216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.250258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.250280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.250861] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.251027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.251035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.251041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.253664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.962 [2024-06-11 03:55:25.262657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.263181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.263224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.263247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.263825] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.263983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.263990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.263996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.266570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.962 [2024-06-11 03:55:25.275441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.275902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.275944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.275966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.276558] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.277114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.277123] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.277128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.279703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.962 [2024-06-11 03:55:25.288269] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.288621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.288636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.288643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.288800] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.288958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.288965] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.288974] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.291547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.962 [2024-06-11 03:55:25.301212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.301628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.301643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.301649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.301807] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.301966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.301973] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.301979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.304550] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.962 [2024-06-11 03:55:25.314072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.314500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.314516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.314523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.314689] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.314856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.314864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.314870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.317431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:43.962 [2024-06-11 03:55:25.326918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.327308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.327350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.327373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.327950] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.328343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.328352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.328358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.330885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:43.962 [2024-06-11 03:55:25.339764] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:43.962 [2024-06-11 03:55:25.340205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:43.962 [2024-06-11 03:55:25.340247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:43.962 [2024-06-11 03:55:25.340269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:43.962 [2024-06-11 03:55:25.340848] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:43.962 [2024-06-11 03:55:25.341092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:43.962 [2024-06-11 03:55:25.341100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:43.962 [2024-06-11 03:55:25.341106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:43.962 [2024-06-11 03:55:25.345526] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same nine-entry reset cycle — "resetting controller", connect() failed errno = 111, sock connection error of tqpair=0x1318770 (addr=10.0.0.2, port=4420), recv-state error, "Failed to flush ... Bad file descriptor", "Ctrlr is in error state", "controller reinitialization failed", "in failed state.", "Resetting controller failed." — repeats 48 more times at a ~13 ms cadence, from 03:55:25.353529 through 03:55:25.964584; only the timestamps change ...]
00:58:44.746 [2024-06-11 03:55:25.973402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.746 [2024-06-11 03:55:25.973801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.746 [2024-06-11 03:55:25.973843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.746 [2024-06-11 03:55:25.973864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.746 [2024-06-11 03:55:25.974458] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.746 [2024-06-11 03:55:25.974900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.746 [2024-06-11 03:55:25.974908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.746 [2024-06-11 03:55:25.974914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.746 [2024-06-11 03:55:25.977515] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:44.746 [2024-06-11 03:55:25.986242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.746 [2024-06-11 03:55:25.986663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.746 [2024-06-11 03:55:25.986678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.746 [2024-06-11 03:55:25.986684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.746 [2024-06-11 03:55:25.986842] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.746 [2024-06-11 03:55:25.987000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.746 [2024-06-11 03:55:25.987015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.746 [2024-06-11 03:55:25.987022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.746 [2024-06-11 03:55:25.989637] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:44.746 [2024-06-11 03:55:25.999047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.746 [2024-06-11 03:55:25.999436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.746 [2024-06-11 03:55:25.999477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.746 [2024-06-11 03:55:25.999499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.746 [2024-06-11 03:55:26.000005] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.746 [2024-06-11 03:55:26.000192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.746 [2024-06-11 03:55:26.000200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.746 [2024-06-11 03:55:26.000206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.002809] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:44.747 [2024-06-11 03:55:26.011885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.012340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.012356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.012363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.012529] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.012695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.012703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.012708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.015355] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:44.747 [2024-06-11 03:55:26.024629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.025026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.025043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.025049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.025215] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.025382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.025390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.025396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.028064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:44.747 [2024-06-11 03:55:26.037463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.037907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.037954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.037976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.038572] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.038890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.038903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.038912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.043358] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:44.747 [2024-06-11 03:55:26.051106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.051575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.051617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.051638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.052232] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.052825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.052834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.052840] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.055758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:44.747 [2024-06-11 03:55:26.063853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.064322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.064338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.064345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.064512] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.064678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.064686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.064691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.067311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:44.747 [2024-06-11 03:55:26.076669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.077040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.077055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.077064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.077222] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.077379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.077387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.077392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.080007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:44.747 [2024-06-11 03:55:26.089425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.089865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.089881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.089887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.090066] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.090233] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.090241] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.090247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.092948] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:44.747 [2024-06-11 03:55:26.102415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.102894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.102909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.102916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.103094] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.103266] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.103275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.103281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.106028] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:44.747 [2024-06-11 03:55:26.115496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.115960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.115977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.115984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.116162] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.116342] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.116353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.116360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.119021] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:44.747 [2024-06-11 03:55:26.128391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.747 [2024-06-11 03:55:26.128866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.747 [2024-06-11 03:55:26.128909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.747 [2024-06-11 03:55:26.128930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.747 [2024-06-11 03:55:26.129527] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.747 [2024-06-11 03:55:26.130047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.747 [2024-06-11 03:55:26.130056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.747 [2024-06-11 03:55:26.130062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.747 [2024-06-11 03:55:26.134211] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:44.747 [2024-06-11 03:55:26.142408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:44.748 [2024-06-11 03:55:26.142885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:44.748 [2024-06-11 03:55:26.142928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:44.748 [2024-06-11 03:55:26.142949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:44.748 [2024-06-11 03:55:26.143546] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:44.748 [2024-06-11 03:55:26.144123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:44.748 [2024-06-11 03:55:26.144132] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:44.748 [2024-06-11 03:55:26.144138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:44.748 [2024-06-11 03:55:26.147055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
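(For context on the recurring failure above: `connect() failed, errno = 111` is ECONNREFUSED on Linux, i.e. the initiator's TCP connect to the target at 10.0.0.2:4420, the default NVMe/TCP port, is being actively refused while no listener is up. A minimal, self-contained C sketch of the same failure mode follows; it is illustrative only, not part of the test suite, and the address/port are simply copied from the log.)

/* Minimal illustration: reproduce the "connect() failed, errno = 111"
 * pattern seen in the log by attempting a plain TCP connect to a port
 * where nothing is listening. 10.0.0.2:4420 mirrors the log's target;
 * substitute any reachable host that refuses the port. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port, Linux reports ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

(Compiled with `cc demo.c` and run against a reachable host with nothing listening on the port, this prints `connect() failed, errno = 111 (Connection refused)`, matching the posix_sock_create error above.)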
00:58:45.008 [2024-06-11 03:55:26.155341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.008 [2024-06-11 03:55:26.155814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.008 [2024-06-11 03:55:26.155857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.008 [2024-06-11 03:55:26.155889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.008 [2024-06-11 03:55:26.156410] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.008 [2024-06-11 03:55:26.156578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.008 [2024-06-11 03:55:26.156586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.008 [2024-06-11 03:55:26.156592] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.008 [2024-06-11 03:55:26.159263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.008 [2024-06-11 03:55:26.168175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.008 [2024-06-11 03:55:26.168637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.008 [2024-06-11 03:55:26.168677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.008 [2024-06-11 03:55:26.168699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.008 [2024-06-11 03:55:26.169216] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.008 [2024-06-11 03:55:26.169384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.008 [2024-06-11 03:55:26.169392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.008 [2024-06-11 03:55:26.169398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.008 [2024-06-11 03:55:26.171999] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.008 [2024-06-11 03:55:26.180923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.008 [2024-06-11 03:55:26.181395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.008 [2024-06-11 03:55:26.181442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.008 [2024-06-11 03:55:26.181464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.008 [2024-06-11 03:55:26.182055] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.008 [2024-06-11 03:55:26.182241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.008 [2024-06-11 03:55:26.182249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.008 [2024-06-11 03:55:26.182255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.008 [2024-06-11 03:55:26.184856] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.008 [2024-06-11 03:55:26.193777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.008 [2024-06-11 03:55:26.194248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.008 [2024-06-11 03:55:26.194264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.008 [2024-06-11 03:55:26.194270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.008 [2024-06-11 03:55:26.194438] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.008 [2024-06-11 03:55:26.194604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.008 [2024-06-11 03:55:26.194612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.008 [2024-06-11 03:55:26.194617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.008 [2024-06-11 03:55:26.197242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.008 [2024-06-11 03:55:26.206505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.008 [2024-06-11 03:55:26.206873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.008 [2024-06-11 03:55:26.206888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.008 [2024-06-11 03:55:26.206894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.008 [2024-06-11 03:55:26.207077] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.008 [2024-06-11 03:55:26.207244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.008 [2024-06-11 03:55:26.207252] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.008 [2024-06-11 03:55:26.207257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.008 [2024-06-11 03:55:26.209860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.008 [2024-06-11 03:55:26.219328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.008 [2024-06-11 03:55:26.219712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.008 [2024-06-11 03:55:26.219728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.008 [2024-06-11 03:55:26.219735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.008 [2024-06-11 03:55:26.219902] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.008 [2024-06-11 03:55:26.220078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.008 [2024-06-11 03:55:26.220086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.008 [2024-06-11 03:55:26.220092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.008 [2024-06-11 03:55:26.222696] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.008 [2024-06-11 03:55:26.232099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.008 [2024-06-11 03:55:26.232569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.008 [2024-06-11 03:55:26.232584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.008 [2024-06-11 03:55:26.232590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.008 [2024-06-11 03:55:26.232758] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.008 [2024-06-11 03:55:26.232924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.008 [2024-06-11 03:55:26.232931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.008 [2024-06-11 03:55:26.232937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.008 [2024-06-11 03:55:26.235544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.008 [2024-06-11 03:55:26.244809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.245247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.245262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.245268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.245426] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.245583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.245590] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.245599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.248217] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.009 [2024-06-11 03:55:26.257723] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.258204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.258248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.258269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.258686] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.258845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.258852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.258858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.261545] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.009 [2024-06-11 03:55:26.270516] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.270975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.270990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.270997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.271170] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.271337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.271344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.271350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.273950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.009 [2024-06-11 03:55:26.283254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.283692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.283706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.283713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.283870] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.284033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.284057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.284063] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.286656] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.009 [2024-06-11 03:55:26.296030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.296494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.296543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.296566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.297158] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.297672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.297680] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.297686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.300288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.009 [2024-06-11 03:55:26.308751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.309199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.309215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.309221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.309379] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.309537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.309544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.309550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.312167] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.009 [2024-06-11 03:55:26.321592] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.321983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.321998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.322005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.322178] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.322345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.322352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.322358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.324959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.009 [2024-06-11 03:55:26.334381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.334746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.334760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.334767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.334924] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.335110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.335119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.335125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.337724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.009 [2024-06-11 03:55:26.347176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.347622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.347637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.347643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.347801] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.347959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.347966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.347972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.350590] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.009 [2024-06-11 03:55:26.359951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.360427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.360443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.360450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.009 [2024-06-11 03:55:26.360621] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.009 [2024-06-11 03:55:26.360792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.009 [2024-06-11 03:55:26.360800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.009 [2024-06-11 03:55:26.360806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.009 [2024-06-11 03:55:26.363577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.009 [2024-06-11 03:55:26.372905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.009 [2024-06-11 03:55:26.373356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.009 [2024-06-11 03:55:26.373372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.009 [2024-06-11 03:55:26.373379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.010 [2024-06-11 03:55:26.373550] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.010 [2024-06-11 03:55:26.373721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.010 [2024-06-11 03:55:26.373729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.010 [2024-06-11 03:55:26.373735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.010 [2024-06-11 03:55:26.376428] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.010 [2024-06-11 03:55:26.385753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.010 [2024-06-11 03:55:26.386100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.010 [2024-06-11 03:55:26.386115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.010 [2024-06-11 03:55:26.386122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.010 [2024-06-11 03:55:26.386289] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.010 [2024-06-11 03:55:26.386457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.010 [2024-06-11 03:55:26.386464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.010 [2024-06-11 03:55:26.386470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.010 [2024-06-11 03:55:26.389181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.010 [2024-06-11 03:55:26.398600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.010 [2024-06-11 03:55:26.399049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.010 [2024-06-11 03:55:26.399064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.010 [2024-06-11 03:55:26.399071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.010 [2024-06-11 03:55:26.399229] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.010 [2024-06-11 03:55:26.399387] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.010 [2024-06-11 03:55:26.399395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.010 [2024-06-11 03:55:26.399400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.010 [2024-06-11 03:55:26.402018] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.270 [2024-06-11 03:55:26.411631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.412097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.412113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.412120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.412291] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.412462] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.412470] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.412476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.415151] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.270 [2024-06-11 03:55:26.424443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.424901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.424943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.424972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.425577] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.425929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.425937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.425943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.428547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.270 [2024-06-11 03:55:26.437288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.437674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.437689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.437696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.437863] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.438035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.438044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.438050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.440647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.270 [2024-06-11 03:55:26.450052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.450498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.450528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.450552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.451143] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.451724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.451749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.451769] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.454456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.270 [2024-06-11 03:55:26.463130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.463472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.463491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.463499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.463708] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.463917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.463929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.463934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.466546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.270 [2024-06-11 03:55:26.476054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.476431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.476446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.476452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.476612] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.476769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.476777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.476783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.479341] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.270 [2024-06-11 03:55:26.488987] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.489458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.489500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.489522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.490023] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.490191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.490199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.490205] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.492848] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.270 [2024-06-11 03:55:26.501976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.502339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.502354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.502361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.502528] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.502695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.502703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.502709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.505391] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.270 [2024-06-11 03:55:26.514905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.515314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.270 [2024-06-11 03:55:26.515330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.270 [2024-06-11 03:55:26.515337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.270 [2024-06-11 03:55:26.515504] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.270 [2024-06-11 03:55:26.515670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.270 [2024-06-11 03:55:26.515678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.270 [2024-06-11 03:55:26.515684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.270 [2024-06-11 03:55:26.518357] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.270 [2024-06-11 03:55:26.527817] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.270 [2024-06-11 03:55:26.528267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.528283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.528290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.528457] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.528623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.528631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.528637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.531285] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.271 [2024-06-11 03:55:26.540630] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.541052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.541090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.541113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.541693] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.542201] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.542210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.542215] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.544814] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.271 [2024-06-11 03:55:26.553523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.553877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.553918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.553940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.554546] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.555150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.555158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.555164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.557767] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.271 [2024-06-11 03:55:26.566300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.566689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.566704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.566711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.566877] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.567050] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.567058] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.567064] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.569666] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.271 [2024-06-11 03:55:26.579189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.579641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.579656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.579663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.579829] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.579999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.580007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.580018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.582620] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.271 [2024-06-11 03:55:26.592097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.592588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.592606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.592612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.592779] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.592947] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.592954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.592964] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.595571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.271 [2024-06-11 03:55:26.604906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.605300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.605316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.605322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.605490] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.605656] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.605664] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.605670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.608281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.271 [2024-06-11 03:55:26.617795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.618257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.618274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.618281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.618453] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.618629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.618637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.618643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.621392] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.271 [2024-06-11 03:55:26.630739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.631211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.631254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.631276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.631858] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.632347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.632356] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.632361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.635030] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.271 [2024-06-11 03:55:26.643668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.644061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.644076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.271 [2024-06-11 03:55:26.644083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.271 [2024-06-11 03:55:26.644250] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.271 [2024-06-11 03:55:26.644416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.271 [2024-06-11 03:55:26.644424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.271 [2024-06-11 03:55:26.644430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.271 [2024-06-11 03:55:26.647100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.271 [2024-06-11 03:55:26.656716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.271 [2024-06-11 03:55:26.657035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.271 [2024-06-11 03:55:26.657051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.272 [2024-06-11 03:55:26.657058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.272 [2024-06-11 03:55:26.657233] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.272 [2024-06-11 03:55:26.657392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.272 [2024-06-11 03:55:26.657401] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.272 [2024-06-11 03:55:26.657407] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.272 [2024-06-11 03:55:26.659994] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.272 [2024-06-11 03:55:26.669848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.272 [2024-06-11 03:55:26.670180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.272 [2024-06-11 03:55:26.670196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.272 [2024-06-11 03:55:26.670203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.272 [2024-06-11 03:55:26.670375] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.272 [2024-06-11 03:55:26.670547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.272 [2024-06-11 03:55:26.670556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.272 [2024-06-11 03:55:26.670562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.532 [2024-06-11 03:55:26.673315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.532 [2024-06-11 03:55:26.682727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.532 [2024-06-11 03:55:26.683118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.532 [2024-06-11 03:55:26.683134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.532 [2024-06-11 03:55:26.683140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.532 [2024-06-11 03:55:26.683310] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.532 [2024-06-11 03:55:26.683476] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.532 [2024-06-11 03:55:26.683484] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.532 [2024-06-11 03:55:26.683491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.532 [2024-06-11 03:55:26.686096] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.532 [2024-06-11 03:55:26.695584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.532 [2024-06-11 03:55:26.695982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.532 [2024-06-11 03:55:26.695996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.532 [2024-06-11 03:55:26.696003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.532 [2024-06-11 03:55:26.696176] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.532 [2024-06-11 03:55:26.696343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.532 [2024-06-11 03:55:26.696351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.532 [2024-06-11 03:55:26.696357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.532 [2024-06-11 03:55:26.698960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.532 [2024-06-11 03:55:26.708308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.532 [2024-06-11 03:55:26.708630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.532 [2024-06-11 03:55:26.708646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.532 [2024-06-11 03:55:26.708652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.532 [2024-06-11 03:55:26.708820] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.532 [2024-06-11 03:55:26.708989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.532 [2024-06-11 03:55:26.708998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.532 [2024-06-11 03:55:26.709004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.532 [2024-06-11 03:55:26.711614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.532 [2024-06-11 03:55:26.721042] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.532 [2024-06-11 03:55:26.721414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.532 [2024-06-11 03:55:26.721429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.532 [2024-06-11 03:55:26.721436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.532 [2024-06-11 03:55:26.721603] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.532 [2024-06-11 03:55:26.721769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.532 [2024-06-11 03:55:26.721777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.532 [2024-06-11 03:55:26.721786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.724452] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.533 [2024-06-11 03:55:26.733798] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.734264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.734280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.734315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.734894] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.735486] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.735514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.735534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.739975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.533 [2024-06-11 03:55:26.747753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.748135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.748178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.748200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.748665] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.748848] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.748857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.748863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.751780] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.533 [2024-06-11 03:55:26.760635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.760996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.761018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.761026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.761193] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.761360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.761368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.761374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.763977] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.533 [2024-06-11 03:55:26.773472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.773818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.773837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.773843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.774017] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.774184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.774192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.774198] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.776802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.533 [2024-06-11 03:55:26.786220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.786600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.786615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.786621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.786788] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.786955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.786963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.786969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.789577] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.533 [2024-06-11 03:55:26.799067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.799398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.799412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.799418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.799577] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.799734] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.799741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.799747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.802356] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.533 [2024-06-11 03:55:26.811851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.812232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.812274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.812295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.812754] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.812923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.812931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.812938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.815558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.533 [2024-06-11 03:55:26.824600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.824953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.824995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.825032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.825603] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.825771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.825779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.825785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.828462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.533 [2024-06-11 03:55:26.837354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.837791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.837834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.837855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.838454] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.838926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.838937] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.838945] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.841541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.533 [2024-06-11 03:55:26.850083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.850530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.533 [2024-06-11 03:55:26.850545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.533 [2024-06-11 03:55:26.850551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.533 [2024-06-11 03:55:26.850718] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.533 [2024-06-11 03:55:26.850888] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.533 [2024-06-11 03:55:26.850897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.533 [2024-06-11 03:55:26.850903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.533 [2024-06-11 03:55:26.853514] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.533 [2024-06-11 03:55:26.862852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.533 [2024-06-11 03:55:26.863172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.534 [2024-06-11 03:55:26.863188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.534 [2024-06-11 03:55:26.863195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.534 [2024-06-11 03:55:26.863362] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.534 [2024-06-11 03:55:26.863528] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.534 [2024-06-11 03:55:26.863536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.534 [2024-06-11 03:55:26.863542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.534 [2024-06-11 03:55:26.866148] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.534 [2024-06-11 03:55:26.875637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.534 [2024-06-11 03:55:26.876003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.534 [2024-06-11 03:55:26.876026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.534 [2024-06-11 03:55:26.876033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.534 [2024-06-11 03:55:26.876204] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.534 [2024-06-11 03:55:26.876377] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.534 [2024-06-11 03:55:26.876385] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.534 [2024-06-11 03:55:26.876391] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.534 [2024-06-11 03:55:26.879142] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.534 [2024-06-11 03:55:26.888623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.534 [2024-06-11 03:55:26.889007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.534 [2024-06-11 03:55:26.889029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.534 [2024-06-11 03:55:26.889036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.534 [2024-06-11 03:55:26.889207] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.534 [2024-06-11 03:55:26.889385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.534 [2024-06-11 03:55:26.889393] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.534 [2024-06-11 03:55:26.889399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.534 [2024-06-11 03:55:26.892065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.534 [2024-06-11 03:55:26.901503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.534 [2024-06-11 03:55:26.901937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.534 [2024-06-11 03:55:26.901976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.534 [2024-06-11 03:55:26.902006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.534 [2024-06-11 03:55:26.902599] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.534 [2024-06-11 03:55:26.903191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.534 [2024-06-11 03:55:26.903200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.534 [2024-06-11 03:55:26.903206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.534 [2024-06-11 03:55:26.905869] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.534 [2024-06-11 03:55:26.914368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.534 [2024-06-11 03:55:26.914671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.534 [2024-06-11 03:55:26.914686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.534 [2024-06-11 03:55:26.914693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.534 [2024-06-11 03:55:26.914861] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.534 [2024-06-11 03:55:26.915034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.534 [2024-06-11 03:55:26.915043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.534 [2024-06-11 03:55:26.915049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.534 [2024-06-11 03:55:26.917639] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.534 [2024-06-11 03:55:26.927139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.534 [2024-06-11 03:55:26.927580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.534 [2024-06-11 03:55:26.927595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.534 [2024-06-11 03:55:26.927601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.534 [2024-06-11 03:55:26.927768] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.534 [2024-06-11 03:55:26.927934] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.534 [2024-06-11 03:55:26.927942] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.534 [2024-06-11 03:55:26.927948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.534 [2024-06-11 03:55:26.930580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.794 [2024-06-11 03:55:26.940094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.794 [2024-06-11 03:55:26.940438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.794 [2024-06-11 03:55:26.940454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.794 [2024-06-11 03:55:26.940461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.794 [2024-06-11 03:55:26.940633] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.794 [2024-06-11 03:55:26.940804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.794 [2024-06-11 03:55:26.940815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.794 [2024-06-11 03:55:26.940822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.794 [2024-06-11 03:55:26.943531] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.794 [2024-06-11 03:55:26.952812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.794 [2024-06-11 03:55:26.953259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.794 [2024-06-11 03:55:26.953274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.794 [2024-06-11 03:55:26.953281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.794 [2024-06-11 03:55:26.953447] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.794 [2024-06-11 03:55:26.953613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.794 [2024-06-11 03:55:26.953620] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.794 [2024-06-11 03:55:26.953626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.794 [2024-06-11 03:55:26.956281] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.794 [2024-06-11 03:55:26.965551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.794 [2024-06-11 03:55:26.965993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.794 [2024-06-11 03:55:26.966014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.794 [2024-06-11 03:55:26.966021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.794 [2024-06-11 03:55:26.966188] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.794 [2024-06-11 03:55:26.966354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.794 [2024-06-11 03:55:26.966362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.794 [2024-06-11 03:55:26.966368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.794 [2024-06-11 03:55:26.968971] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.794 [2024-06-11 03:55:26.978360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.794 [2024-06-11 03:55:26.978664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.794 [2024-06-11 03:55:26.978679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.794 [2024-06-11 03:55:26.978686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.794 [2024-06-11 03:55:26.978852] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.794 [2024-06-11 03:55:26.979024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.794 [2024-06-11 03:55:26.979032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.794 [2024-06-11 03:55:26.979038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.794 [2024-06-11 03:55:26.981710] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.794 [2024-06-11 03:55:26.991190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.794 [2024-06-11 03:55:26.991638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.794 [2024-06-11 03:55:26.991654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.794 [2024-06-11 03:55:26.991661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.794 [2024-06-11 03:55:26.991828] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.794 [2024-06-11 03:55:26.991995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.794 [2024-06-11 03:55:26.992002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.794 [2024-06-11 03:55:26.992008] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.794 [2024-06-11 03:55:26.994619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.794 [2024-06-11 03:55:27.003956] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.794 [2024-06-11 03:55:27.004287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.794 [2024-06-11 03:55:27.004330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.794 [2024-06-11 03:55:27.004351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.795 [2024-06-11 03:55:27.004932] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.795 [2024-06-11 03:55:27.005465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.795 [2024-06-11 03:55:27.005474] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.795 [2024-06-11 03:55:27.005479] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.795 [2024-06-11 03:55:27.008093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.795 [2024-06-11 03:55:27.016777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.795 [2024-06-11 03:55:27.017156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.795 [2024-06-11 03:55:27.017172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.795 [2024-06-11 03:55:27.017179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.795 [2024-06-11 03:55:27.017352] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.795 [2024-06-11 03:55:27.017511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.795 [2024-06-11 03:55:27.017518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.795 [2024-06-11 03:55:27.017524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.795 [2024-06-11 03:55:27.020121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.795 [2024-06-11 03:55:27.029610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.795 [2024-06-11 03:55:27.029923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.795 [2024-06-11 03:55:27.029968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.795 [2024-06-11 03:55:27.029989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.795 [2024-06-11 03:55:27.030596] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.795 [2024-06-11 03:55:27.031189] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.795 [2024-06-11 03:55:27.031215] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.795 [2024-06-11 03:55:27.031236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.795 [2024-06-11 03:55:27.033861] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.795 [2024-06-11 03:55:27.042390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.795 [2024-06-11 03:55:27.042705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.795 [2024-06-11 03:55:27.042720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.795 [2024-06-11 03:55:27.042726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.795 [2024-06-11 03:55:27.042892] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.795 [2024-06-11 03:55:27.043066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.795 [2024-06-11 03:55:27.043074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.795 [2024-06-11 03:55:27.043080] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.795 [2024-06-11 03:55:27.045681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.795 [2024-06-11 03:55:27.055168] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.795 [2024-06-11 03:55:27.055563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.795 [2024-06-11 03:55:27.055578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.795 [2024-06-11 03:55:27.055584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.795 [2024-06-11 03:55:27.055751] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.795 [2024-06-11 03:55:27.055922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.795 [2024-06-11 03:55:27.055930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.795 [2024-06-11 03:55:27.055935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.795 [2024-06-11 03:55:27.058549] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:45.795 [2024-06-11 03:55:27.067976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:45.795 [2024-06-11 03:55:27.068432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:45.795 [2024-06-11 03:55:27.068447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:45.795 [2024-06-11 03:55:27.068453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:45.795 [2024-06-11 03:55:27.068612] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:45.795 [2024-06-11 03:55:27.068769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:45.795 [2024-06-11 03:55:27.068777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:45.795 [2024-06-11 03:55:27.068785] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:45.795 [2024-06-11 03:55:27.071396] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:45.795 [2024-06-11 03:55:27.080730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.795 [2024-06-11 03:55:27.081146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.795 [2024-06-11 03:55:27.081173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.795 [2024-06-11 03:55:27.081179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.795 [2024-06-11 03:55:27.081337] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.795 [2024-06-11 03:55:27.081495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.795 [2024-06-11 03:55:27.081502] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.795 [2024-06-11 03:55:27.081507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.795 [2024-06-11 03:55:27.084107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.795 [2024-06-11 03:55:27.093665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.795 [2024-06-11 03:55:27.094079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.795 [2024-06-11 03:55:27.094094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.795 [2024-06-11 03:55:27.094100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.795 [2024-06-11 03:55:27.094258] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.795 [2024-06-11 03:55:27.094415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.795 [2024-06-11 03:55:27.094422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.795 [2024-06-11 03:55:27.094428] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.795 [2024-06-11 03:55:27.097078] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.795 [2024-06-11 03:55:27.106600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.795 [2024-06-11 03:55:27.107020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.795 [2024-06-11 03:55:27.107034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.795 [2024-06-11 03:55:27.107040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.795 [2024-06-11 03:55:27.107201] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.795 [2024-06-11 03:55:27.107359] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.795 [2024-06-11 03:55:27.107367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.795 [2024-06-11 03:55:27.107372] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.795 [2024-06-11 03:55:27.109959] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.795 [2024-06-11 03:55:27.119372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.795 [2024-06-11 03:55:27.119829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.795 [2024-06-11 03:55:27.119870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.795 [2024-06-11 03:55:27.119892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.795 [2024-06-11 03:55:27.120487] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.796 [2024-06-11 03:55:27.120842] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.796 [2024-06-11 03:55:27.120850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.796 [2024-06-11 03:55:27.120856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.796 [2024-06-11 03:55:27.123458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.796 [2024-06-11 03:55:27.132189] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.796 [2024-06-11 03:55:27.132637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.796 [2024-06-11 03:55:27.132653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.796 [2024-06-11 03:55:27.132660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.796 [2024-06-11 03:55:27.132831] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.796 [2024-06-11 03:55:27.133002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.796 [2024-06-11 03:55:27.133017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.796 [2024-06-11 03:55:27.133023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.796 [2024-06-11 03:55:27.135860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.796 [2024-06-11 03:55:27.145185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.796 [2024-06-11 03:55:27.145662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.796 [2024-06-11 03:55:27.145705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.796 [2024-06-11 03:55:27.145726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.796 [2024-06-11 03:55:27.146319] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.796 [2024-06-11 03:55:27.146730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.796 [2024-06-11 03:55:27.146738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.796 [2024-06-11 03:55:27.146744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.796 [2024-06-11 03:55:27.149408] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.796 [2024-06-11 03:55:27.158063] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.796 [2024-06-11 03:55:27.158430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.796 [2024-06-11 03:55:27.158471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.796 [2024-06-11 03:55:27.158493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.796 [2024-06-11 03:55:27.159087] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.796 [2024-06-11 03:55:27.159658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.796 [2024-06-11 03:55:27.159670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.796 [2024-06-11 03:55:27.159679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.796 [2024-06-11 03:55:27.164135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.796 [2024-06-11 03:55:27.171875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.796 [2024-06-11 03:55:27.172325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.796 [2024-06-11 03:55:27.172374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.796 [2024-06-11 03:55:27.172395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.796 [2024-06-11 03:55:27.172974] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.796 [2024-06-11 03:55:27.173570] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.796 [2024-06-11 03:55:27.173596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.796 [2024-06-11 03:55:27.173616] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.796 [2024-06-11 03:55:27.176549] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:45.796 [2024-06-11 03:55:27.184590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:45.796 [2024-06-11 03:55:27.185037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:45.796 [2024-06-11 03:55:27.185052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:45.796 [2024-06-11 03:55:27.185059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:45.796 [2024-06-11 03:55:27.185225] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:45.796 [2024-06-11 03:55:27.185391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:45.796 [2024-06-11 03:55:27.185400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:45.796 [2024-06-11 03:55:27.185405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:45.796 [2024-06-11 03:55:27.188016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.056 [2024-06-11 03:55:27.197584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.056 [2024-06-11 03:55:27.198020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.056 [2024-06-11 03:55:27.198036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.056 [2024-06-11 03:55:27.198043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.056 [2024-06-11 03:55:27.198214] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.056 [2024-06-11 03:55:27.198396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.056 [2024-06-11 03:55:27.198404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.056 [2024-06-11 03:55:27.198410] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.056 [2024-06-11 03:55:27.201064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.056 [2024-06-11 03:55:27.210390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.056 [2024-06-11 03:55:27.210840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.056 [2024-06-11 03:55:27.210882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.056 [2024-06-11 03:55:27.210904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.056 [2024-06-11 03:55:27.211498] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.056 [2024-06-11 03:55:27.211969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.056 [2024-06-11 03:55:27.211977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.056 [2024-06-11 03:55:27.211983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.056 [2024-06-11 03:55:27.214645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.056 [2024-06-11 03:55:27.223214] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.056 [2024-06-11 03:55:27.223630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.056 [2024-06-11 03:55:27.223645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.056 [2024-06-11 03:55:27.223651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.056 [2024-06-11 03:55:27.223810] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.056 [2024-06-11 03:55:27.223968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.056 [2024-06-11 03:55:27.223975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.056 [2024-06-11 03:55:27.223980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.056 [2024-06-11 03:55:27.226605] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.056 [2024-06-11 03:55:27.235931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.056 [2024-06-11 03:55:27.236380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.056 [2024-06-11 03:55:27.236422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.056 [2024-06-11 03:55:27.236443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.056 [2024-06-11 03:55:27.236875] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.056 [2024-06-11 03:55:27.237047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.056 [2024-06-11 03:55:27.237056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.056 [2024-06-11 03:55:27.237061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.056 [2024-06-11 03:55:27.239663] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.056 [2024-06-11 03:55:27.248727] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.056 [2024-06-11 03:55:27.249182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.056 [2024-06-11 03:55:27.249201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.056 [2024-06-11 03:55:27.249208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.056 [2024-06-11 03:55:27.249376] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.249543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.249551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.249557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.252164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.261551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.261996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.262060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.262082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.262626] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.262794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.262802] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.262808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.265414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.274302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.274743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.274758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.274765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.274932] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.275104] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.275113] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.275119] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.277719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.287046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.287467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.287482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.287489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.287655] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.287825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.287833] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.287839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.290447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.299775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.300216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.300267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.300289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.300805] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.300972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.300980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.300986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.303593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.312622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.313054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.313097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.313118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.313699] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.314103] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.314111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.314117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.316719] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.325443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.325891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.325933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.325955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.326561] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.327037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.327045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.327051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.329649] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.338234] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.338652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.338667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.338674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.338840] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.339007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.339022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.339028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.341628] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.351043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.351478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.351493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.351500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.351668] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.351834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.351842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.351848] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.354459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.363825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.364276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.364292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.057 [2024-06-11 03:55:27.364298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.057 [2024-06-11 03:55:27.364465] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.057 [2024-06-11 03:55:27.364630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.057 [2024-06-11 03:55:27.364638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.057 [2024-06-11 03:55:27.364644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.057 [2024-06-11 03:55:27.367252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.057 [2024-06-11 03:55:27.376574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.057 [2024-06-11 03:55:27.376991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.057 [2024-06-11 03:55:27.377007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.058 [2024-06-11 03:55:27.377022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.058 [2024-06-11 03:55:27.377189] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.058 [2024-06-11 03:55:27.377355] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.058 [2024-06-11 03:55:27.377363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.058 [2024-06-11 03:55:27.377368] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.058 [2024-06-11 03:55:27.379968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.058 [2024-06-11 03:55:27.389297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.058 [2024-06-11 03:55:27.389738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.058 [2024-06-11 03:55:27.389754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.058 [2024-06-11 03:55:27.389761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.058 [2024-06-11 03:55:27.389933] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.058 [2024-06-11 03:55:27.390112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.058 [2024-06-11 03:55:27.390120] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.058 [2024-06-11 03:55:27.390127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.058 [2024-06-11 03:55:27.392867] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.058 [2024-06-11 03:55:27.402190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.058 [2024-06-11 03:55:27.402653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.058 [2024-06-11 03:55:27.402668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.058 [2024-06-11 03:55:27.402675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.058 [2024-06-11 03:55:27.402846] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.058 [2024-06-11 03:55:27.403023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.058 [2024-06-11 03:55:27.403031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.058 [2024-06-11 03:55:27.403038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.058 [2024-06-11 03:55:27.405712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.058 [2024-06-11 03:55:27.415115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.058 [2024-06-11 03:55:27.415545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.058 [2024-06-11 03:55:27.415560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.058 [2024-06-11 03:55:27.415567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.058 [2024-06-11 03:55:27.415735] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.058 [2024-06-11 03:55:27.415900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.058 [2024-06-11 03:55:27.415911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.058 [2024-06-11 03:55:27.415917] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.058 [2024-06-11 03:55:27.418524] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.058 [2024-06-11 03:55:27.427851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.058 [2024-06-11 03:55:27.428305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.058 [2024-06-11 03:55:27.428350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.058 [2024-06-11 03:55:27.428373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.058 [2024-06-11 03:55:27.428951] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.058 [2024-06-11 03:55:27.429386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.058 [2024-06-11 03:55:27.429395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.058 [2024-06-11 03:55:27.429401] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.058 [2024-06-11 03:55:27.432001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.058 [2024-06-11 03:55:27.440584] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.058 [2024-06-11 03:55:27.441032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.058 [2024-06-11 03:55:27.441075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.058 [2024-06-11 03:55:27.441097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.058 [2024-06-11 03:55:27.441678] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.058 [2024-06-11 03:55:27.442282] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.058 [2024-06-11 03:55:27.442290] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.058 [2024-06-11 03:55:27.442296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.058 [2024-06-11 03:55:27.444895] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.058 [2024-06-11 03:55:27.453421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.058 [2024-06-11 03:55:27.453874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.058 [2024-06-11 03:55:27.453889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.058 [2024-06-11 03:55:27.453896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.058 [2024-06-11 03:55:27.454069] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.058 [2024-06-11 03:55:27.454235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.058 [2024-06-11 03:55:27.454243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.058 [2024-06-11 03:55:27.454248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.058 [2024-06-11 03:55:27.457020] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.319 [2024-06-11 03:55:27.466501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.319 [2024-06-11 03:55:27.466940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.319 [2024-06-11 03:55:27.466955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.319 [2024-06-11 03:55:27.466961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.319 [2024-06-11 03:55:27.467135] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.319 [2024-06-11 03:55:27.467301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.319 [2024-06-11 03:55:27.467309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.319 [2024-06-11 03:55:27.467315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.319 [2024-06-11 03:55:27.469983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.319 [2024-06-11 03:55:27.479315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.319 [2024-06-11 03:55:27.479698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.319 [2024-06-11 03:55:27.479714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.319 [2024-06-11 03:55:27.479720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.319 [2024-06-11 03:55:27.479886] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.319 [2024-06-11 03:55:27.480059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.319 [2024-06-11 03:55:27.480068] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.319 [2024-06-11 03:55:27.480074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.319 [2024-06-11 03:55:27.482683] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.319 [2024-06-11 03:55:27.492047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.319 [2024-06-11 03:55:27.492448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.319 [2024-06-11 03:55:27.492464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.319 [2024-06-11 03:55:27.492470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.319 [2024-06-11 03:55:27.492628] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.319 [2024-06-11 03:55:27.492786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.319 [2024-06-11 03:55:27.492793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.319 [2024-06-11 03:55:27.492799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.319 [2024-06-11 03:55:27.495410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.319 [2024-06-11 03:55:27.504767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.319 [2024-06-11 03:55:27.505194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.319 [2024-06-11 03:55:27.505237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.319 [2024-06-11 03:55:27.505259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.319 [2024-06-11 03:55:27.505846] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.319 [2024-06-11 03:55:27.506435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.319 [2024-06-11 03:55:27.506461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.319 [2024-06-11 03:55:27.506481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.319 [2024-06-11 03:55:27.509115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.319 [2024-06-11 03:55:27.517644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.319 [2024-06-11 03:55:27.518095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.319 [2024-06-11 03:55:27.518138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.319 [2024-06-11 03:55:27.518160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.319 [2024-06-11 03:55:27.518541] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.319 [2024-06-11 03:55:27.518708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.319 [2024-06-11 03:55:27.518716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.319 [2024-06-11 03:55:27.518722] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.319 [2024-06-11 03:55:27.521331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.319 [2024-06-11 03:55:27.530414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.319 [2024-06-11 03:55:27.530837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.319 [2024-06-11 03:55:27.530852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.319 [2024-06-11 03:55:27.530859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.319 [2024-06-11 03:55:27.531023] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.319 [2024-06-11 03:55:27.531205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.319 [2024-06-11 03:55:27.531214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.319 [2024-06-11 03:55:27.531220] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.319 [2024-06-11 03:55:27.533820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.319 [2024-06-11 03:55:27.543239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.319 [2024-06-11 03:55:27.543683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.319 [2024-06-11 03:55:27.543698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.543705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.543872] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.544044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.544053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.544062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.546665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.556062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.320 [2024-06-11 03:55:27.556514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.320 [2024-06-11 03:55:27.556529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.556536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.556702] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.556869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.556876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.556882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.559493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.568911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.320 [2024-06-11 03:55:27.569357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.320 [2024-06-11 03:55:27.569373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.569380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.569547] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.569713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.569720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.569727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.572339] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.581669] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.320 [2024-06-11 03:55:27.582110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.320 [2024-06-11 03:55:27.582126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.582132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.582299] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.582464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.582472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.582478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.585086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.594504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.320 [2024-06-11 03:55:27.594905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.320 [2024-06-11 03:55:27.594923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.594929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.595101] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.595269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.595276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.595282] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.597888] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.607235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.320 [2024-06-11 03:55:27.607673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.320 [2024-06-11 03:55:27.607715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.607738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.608339] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.608798] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.608806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.608812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.611592] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.619986] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.320 [2024-06-11 03:55:27.620458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.320 [2024-06-11 03:55:27.620474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.620481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.620648] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.620814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.620823] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.620829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.623437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.632783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.320 [2024-06-11 03:55:27.633257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.320 [2024-06-11 03:55:27.633299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.320 [2024-06-11 03:55:27.633322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.320 [2024-06-11 03:55:27.633837] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.320 [2024-06-11 03:55:27.633998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.320 [2024-06-11 03:55:27.634007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.320 [2024-06-11 03:55:27.634018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.320 [2024-06-11 03:55:27.636634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.320 [2024-06-11 03:55:27.645512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.320 [2024-06-11 03:55:27.645966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.320 [2024-06-11 03:55:27.645981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.320 [2024-06-11 03:55:27.645988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.320 [2024-06-11 03:55:27.646181] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.320 [2024-06-11 03:55:27.646379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.320 [2024-06-11 03:55:27.646387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.320 [2024-06-11 03:55:27.646393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.320 [2024-06-11 03:55:27.649134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.320 [2024-06-11 03:55:27.658459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.320 [2024-06-11 03:55:27.658950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.320 [2024-06-11 03:55:27.658994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.320 [2024-06-11 03:55:27.659031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.320 [2024-06-11 03:55:27.659613] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.320 [2024-06-11 03:55:27.659799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.321 [2024-06-11 03:55:27.659807] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.321 [2024-06-11 03:55:27.659813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.321 [2024-06-11 03:55:27.662500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.321 [2024-06-11 03:55:27.671339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.321 [2024-06-11 03:55:27.671821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.321 [2024-06-11 03:55:27.671864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.321 [2024-06-11 03:55:27.671885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.321 [2024-06-11 03:55:27.672484] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.321 [2024-06-11 03:55:27.673027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.321 [2024-06-11 03:55:27.673036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.321 [2024-06-11 03:55:27.673042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.321 [2024-06-11 03:55:27.675648] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.321 [2024-06-11 03:55:27.684109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.321 [2024-06-11 03:55:27.684588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.321 [2024-06-11 03:55:27.684631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.321 [2024-06-11 03:55:27.684652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.321 [2024-06-11 03:55:27.685086] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.321 [2024-06-11 03:55:27.685254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.321 [2024-06-11 03:55:27.685262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.321 [2024-06-11 03:55:27.685268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.321 [2024-06-11 03:55:27.689557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.321 [2024-06-11 03:55:27.697974] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.321 [2024-06-11 03:55:27.698423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.321 [2024-06-11 03:55:27.698439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.321 [2024-06-11 03:55:27.698446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.321 [2024-06-11 03:55:27.698629] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.321 [2024-06-11 03:55:27.698811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.321 [2024-06-11 03:55:27.698820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.321 [2024-06-11 03:55:27.698827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.321 [2024-06-11 03:55:27.701745] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.321 [2024-06-11 03:55:27.710808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.321 [2024-06-11 03:55:27.711268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.321 [2024-06-11 03:55:27.711322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.321 [2024-06-11 03:55:27.711344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.321 [2024-06-11 03:55:27.711896] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.321 [2024-06-11 03:55:27.712066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.321 [2024-06-11 03:55:27.712074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.321 [2024-06-11 03:55:27.712081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.321 [2024-06-11 03:55:27.714707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.581 [2024-06-11 03:55:27.723921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.581 [2024-06-11 03:55:27.724330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.581 [2024-06-11 03:55:27.724346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.581 [2024-06-11 03:55:27.724356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.581 [2024-06-11 03:55:27.724522] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.581 [2024-06-11 03:55:27.724689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.581 [2024-06-11 03:55:27.724697] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.581 [2024-06-11 03:55:27.724703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.581 [2024-06-11 03:55:27.727407] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.581 [2024-06-11 03:55:27.736645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.581 [2024-06-11 03:55:27.737099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.581 [2024-06-11 03:55:27.737143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.581 [2024-06-11 03:55:27.737165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.581 [2024-06-11 03:55:27.737746] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.581 [2024-06-11 03:55:27.737999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.581 [2024-06-11 03:55:27.738007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.581 [2024-06-11 03:55:27.738018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.740619] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.582 [2024-06-11 03:55:27.749362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.749733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.749749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.749755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.749923] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.750096] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.750104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.750110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.752711] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.582 [2024-06-11 03:55:27.762361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.762809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.762825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.762831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.762998] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.763190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.763207] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.763213] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.765915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.582 [2024-06-11 03:55:27.775351] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.775803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.775846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.775868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.776360] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.776638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.776650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.776660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.781111] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.582 [2024-06-11 03:55:27.789033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.789403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.789419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.789426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.789609] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.789792] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.789800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.789807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.792727] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.582 [2024-06-11 03:55:27.801750] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.802163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.802179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.802186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.802352] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.802519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.802527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.802532] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.805164] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.582 [2024-06-11 03:55:27.814835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.815246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.815262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.815269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.815442] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.815613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.815621] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.815628] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.818379] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.582 [2024-06-11 03:55:27.827790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.828251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.828294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.828316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.828896] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.829382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.829391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.829396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.832107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.582 [2024-06-11 03:55:27.840627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.841071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.841086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.841093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.841261] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.841428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.841435] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.582 [2024-06-11 03:55:27.841441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.582 [2024-06-11 03:55:27.844051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.582 [2024-06-11 03:55:27.853423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.582 [2024-06-11 03:55:27.853839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.582 [2024-06-11 03:55:27.853853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.582 [2024-06-11 03:55:27.853863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.582 [2024-06-11 03:55:27.854028] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.582 [2024-06-11 03:55:27.854210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.582 [2024-06-11 03:55:27.854218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.854224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.856822] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.583 [2024-06-11 03:55:27.866240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.866717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.866758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.866779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.867303] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.867578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.867591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.867601] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.872055] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.583 [2024-06-11 03:55:27.879913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.880402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.880444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.880466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.881061] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.881316] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.881324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.881331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.884249] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.583 [2024-06-11 03:55:27.892711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.893150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.893166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.893172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.893340] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.893509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.893520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.893526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.896135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.583 [2024-06-11 03:55:27.905720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.906139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.906182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.906203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.906661] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.906829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.906836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.906842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.909506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.583 [2024-06-11 03:55:27.918622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.919049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.919065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.919072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.919239] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.919405] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.919412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.919418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.922090] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.583 [2024-06-11 03:55:27.931473] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.931896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.931912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.931919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.932092] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.932259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.932266] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.932272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.934875] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.583 [2024-06-11 03:55:27.944198] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.944663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.944704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.944726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.945228] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.945396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.945403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.945409] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.948014] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.583 [2024-06-11 03:55:27.957059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.957493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.957535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.957556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.958131] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.958412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.583 [2024-06-11 03:55:27.958426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.583 [2024-06-11 03:55:27.958436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.583 [2024-06-11 03:55:27.962880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.583 [2024-06-11 03:55:27.970486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.583 [2024-06-11 03:55:27.970927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.583 [2024-06-11 03:55:27.970943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.583 [2024-06-11 03:55:27.970950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.583 [2024-06-11 03:55:27.971140] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.583 [2024-06-11 03:55:27.971329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.584 [2024-06-11 03:55:27.971338] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.584 [2024-06-11 03:55:27.971345] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.584 [2024-06-11 03:55:27.974259] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.584 [2024-06-11 03:55:27.983527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.584 [2024-06-11 03:55:27.983961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.584 [2024-06-11 03:55:27.983978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.584 [2024-06-11 03:55:27.983984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.584 [2024-06-11 03:55:27.984165] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.584 [2024-06-11 03:55:27.984337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.584 [2024-06-11 03:55:27.984345] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.584 [2024-06-11 03:55:27.984351] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.844 [2024-06-11 03:55:27.987085] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.844 [2024-06-11 03:55:27.996361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.844 [2024-06-11 03:55:27.996728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.844 [2024-06-11 03:55:27.996770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.844 [2024-06-11 03:55:27.996791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.844 [2024-06-11 03:55:27.997385] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.844 [2024-06-11 03:55:27.997756] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.844 [2024-06-11 03:55:27.997765] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.844 [2024-06-11 03:55:27.997770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.844 [2024-06-11 03:55:28.000373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.844 [2024-06-11 03:55:28.009099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.844 [2024-06-11 03:55:28.009526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.844 [2024-06-11 03:55:28.009567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.844 [2024-06-11 03:55:28.009589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.844 [2024-06-11 03:55:28.010184] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.844 [2024-06-11 03:55:28.010691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.844 [2024-06-11 03:55:28.010699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.844 [2024-06-11 03:55:28.010705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.844 [2024-06-11 03:55:28.013307] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.844 [2024-06-11 03:55:28.021832] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.844 [2024-06-11 03:55:28.022278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.844 [2024-06-11 03:55:28.022294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.844 [2024-06-11 03:55:28.022300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.844 [2024-06-11 03:55:28.022467] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.844 [2024-06-11 03:55:28.022633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.844 [2024-06-11 03:55:28.022641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.844 [2024-06-11 03:55:28.022650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.844 [2024-06-11 03:55:28.025258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.844 [2024-06-11 03:55:28.034582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.844 [2024-06-11 03:55:28.035002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.845 [2024-06-11 03:55:28.035022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.845 [2024-06-11 03:55:28.035028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.845 [2024-06-11 03:55:28.035186] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.845 [2024-06-11 03:55:28.035344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.845 [2024-06-11 03:55:28.035352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.845 [2024-06-11 03:55:28.035357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.845 [2024-06-11 03:55:28.037881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.845 [2024-06-11 03:55:28.047315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.845 [2024-06-11 03:55:28.047683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.845 [2024-06-11 03:55:28.047699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.845 [2024-06-11 03:55:28.047705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.845 [2024-06-11 03:55:28.047872] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.845 [2024-06-11 03:55:28.048043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.845 [2024-06-11 03:55:28.048051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.845 [2024-06-11 03:55:28.048057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.845 [2024-06-11 03:55:28.050665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.845 [2024-06-11 03:55:28.060115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.845 [2024-06-11 03:55:28.060606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.845 [2024-06-11 03:55:28.060650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.845 [2024-06-11 03:55:28.060672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.845 [2024-06-11 03:55:28.061264] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.845 [2024-06-11 03:55:28.061789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.845 [2024-06-11 03:55:28.061798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.845 [2024-06-11 03:55:28.061804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.845 [2024-06-11 03:55:28.064418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.845 [2024-06-11 03:55:28.072910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.845 [2024-06-11 03:55:28.073306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.845 [2024-06-11 03:55:28.073326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.845 [2024-06-11 03:55:28.073332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.845 [2024-06-11 03:55:28.073499] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.845 [2024-06-11 03:55:28.073665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.845 [2024-06-11 03:55:28.073673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.845 [2024-06-11 03:55:28.073679] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.845 [2024-06-11 03:55:28.076420] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2405920 Killed "${NVMF_APP[@]}" "$@"
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@723 -- # xtrace_disable
00:58:46.845 [2024-06-11 03:55:28.085915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:58:46.845 [2024-06-11 03:55:28.086302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.845 [2024-06-11 03:55:28.086319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.845 [2024-06-11 03:55:28.086326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.845 [2024-06-11 03:55:28.086497] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.845 [2024-06-11 03:55:28.086672] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.845 [2024-06-11 03:55:28.086681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.845 [2024-06-11 03:55:28.086687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.845 [2024-06-11 03:55:28.089437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2407122
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2407122
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # '[' -z 2407122 ']'
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local max_retries=100
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
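Here the harness side of the test shows through: line 35 of bdevperf.sh has killed the running nvmf_tgt (pid 2405920), and tgt_init/nvmfappstart immediately relaunch it inside the cvl_0_0_ns_spdk network namespace as pid 2407122, with waitforlisten polling /var/tmp/spdk.sock until the new process is ready; that gap is why the host-side resets keep failing in the meantime. A hedged sketch of the waitforlisten idea (rpc.py and its rpc_get_methods method are real SPDK RPC pieces; the retry bound mirrors the max_retries=100 in the trace, while the sleep interval is an assumption):

```bash
# Poll until the target answers on its RPC socket, bailing out if the
# process dies first. Run from an SPDK checkout so scripts/rpc.py resolves.
wait_for_rpc() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1           # target exited early
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
      return 0                                       # RPC server is listening
    fi
    sleep 0.1                                        # interval is an assumption
  done
  return 1
}
```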
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@839 -- # xtrace_disable
00:58:46.845 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:58:46.845 [2024-06-11 03:55:28.099030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.845 [2024-06-11 03:55:28.099405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.845 [2024-06-11 03:55:28.099420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.845 [2024-06-11 03:55:28.099430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.845 [2024-06-11 03:55:28.099600] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.845 [2024-06-11 03:55:28.099771] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.845 [2024-06-11 03:55:28.099779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.845 [2024-06-11 03:55:28.099786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.845 [2024-06-11 03:55:28.102554] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.845 [2024-06-11 03:55:28.111985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.845 [2024-06-11 03:55:28.112363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.845 [2024-06-11 03:55:28.112380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.845 [2024-06-11 03:55:28.112387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.845 [2024-06-11 03:55:28.112558] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.845 [2024-06-11 03:55:28.112730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.845 [2024-06-11 03:55:28.112738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.845 [2024-06-11 03:55:28.112744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.845 [2024-06-11 03:55:28.115500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.845 [2024-06-11 03:55:28.125021] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.845 [2024-06-11 03:55:28.125341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.845 [2024-06-11 03:55:28.125358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.845 [2024-06-11 03:55:28.125365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.845 [2024-06-11 03:55:28.125538] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.845 [2024-06-11 03:55:28.125709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.845 [2024-06-11 03:55:28.125717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.845 [2024-06-11 03:55:28.125723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.845 [2024-06-11 03:55:28.128475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:58:46.845 [2024-06-11 03:55:28.137992] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:58:46.845 [2024-06-11 03:55:28.138342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:58:46.846 [2024-06-11 03:55:28.138358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420
00:58:46.846 [2024-06-11 03:55:28.138365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set
00:58:46.846 [2024-06-11 03:55:28.138538] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor
00:58:46.846 [2024-06-11 03:55:28.138714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:58:46.846 [2024-06-11 03:55:28.138723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:58:46.846 [2024-06-11 03:55:28.138730] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:58:46.846 [2024-06-11 03:55:28.140659] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization...
00:58:46.846 [2024-06-11 03:55:28.140696] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:58:46.846 [2024-06-11 03:55:28.141486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
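The two DPDK lines at the end record the replacement target coming up: SPDK v24.09-pre on DPDK 22.11.4, with the EAL core mask -c 0xE taken from the -m 0xE passed to nvmf_tgt, and --file-prefix=spdk0 matching its -i 0 shm id. 0xE is binary 1110, i.e. cores 1, 2, and 3, which is why the app later reports three available cores. A quick decode in plain bash, no SPDK needed (the mask value is the log's own; the 0-7 scan range is just a convenient bound):

```bash
# Print which CPU cores an SPDK/DPDK core mask selects.
mask=0xE
printf 'mask %s selects cores:' "$mask"
for core in {0..7}; do
  (( (mask >> core) & 1 )) && printf ' %d' "$core"   # test bit per core
done
echo
```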
00:58:46.846 [2024-06-11 03:55:28.151050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.151485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.151501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.151507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.151680] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.151854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.151862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.151868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.154616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.846 [2024-06-11 03:55:28.164156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.164542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.164559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.164566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.164739] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.164910] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.164919] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.164925] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.167645] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.846 EAL: No free 2048 kB hugepages reported on node 1 00:58:46.846 [2024-06-11 03:55:28.177163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.177622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.177638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.177645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.177817] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.177989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.178000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.178006] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.180760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.846 [2024-06-11 03:55:28.190173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.190607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.190623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.190630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.190802] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.190973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.190982] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.190988] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.193739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
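The EAL notice opening this block means NUMA node 1 has no free 2 MiB hugepages reserved; initialization continues because another node evidently has them. A sketch only of reserving them via sysfs (requires root; the page count is illustrative, and SPDK setups normally delegate this to scripts/setup.sh):

```bash
# Reserve 2 MiB hugepages on NUMA node 1, which would silence the
# "No free 2048 kB hugepages reported on node 1" notice above.
echo 1024 | sudo tee \
  /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
```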
00:58:46.846 [2024-06-11 03:55:28.203139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.203556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.203572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.203579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.203750] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.203921] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.203930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.203936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.204848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:58:46.846 [2024-06-11 03:55:28.206681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.846 [2024-06-11 03:55:28.216087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.216465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.216483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.216491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.216663] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.216836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.216845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.216852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.219578] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.846 [2024-06-11 03:55:28.229109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.229620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.229640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.229648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.229822] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.229995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.230005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.230018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.232760] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:46.846 [2024-06-11 03:55:28.242017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:46.846 [2024-06-11 03:55:28.242406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:46.846 [2024-06-11 03:55:28.242422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:46.846 [2024-06-11 03:55:28.242430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:46.846 [2024-06-11 03:55:28.242603] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:46.846 [2024-06-11 03:55:28.242775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:46.846 [2024-06-11 03:55:28.242783] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:46.846 [2024-06-11 03:55:28.242791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:46.846 [2024-06-11 03:55:28.245454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:58:46.846 [2024-06-11 03:55:28.245481] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:58:46.846 [2024-06-11 03:55:28.245488] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:58:46.846 [2024-06-11 03:55:28.245495] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:58:46.846 [2024-06-11 03:55:28.245500] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:58:46.846 [2024-06-11 03:55:28.245547] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:46.846 [2024-06-11 03:55:28.245536] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:58:46.846 [2024-06-11 03:55:28.245632] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:58:46.846 [2024-06-11 03:55:28.245633] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:58:47.107 [2024-06-11 03:55:28.255127] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.107 [2024-06-11 03:55:28.255529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.107 [2024-06-11 03:55:28.255548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.107 [2024-06-11 03:55:28.255556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.107 [2024-06-11 03:55:28.255731] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.107 [2024-06-11 03:55:28.255908] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.107 [2024-06-11 03:55:28.255917] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.107 [2024-06-11 03:55:28.255924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.107 [2024-06-11 03:55:28.258675] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.107 [2024-06-11 03:55:28.268089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.107 [2024-06-11 03:55:28.268551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.107 [2024-06-11 03:55:28.268569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.107 [2024-06-11 03:55:28.268577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.107 [2024-06-11 03:55:28.268752] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.107 [2024-06-11 03:55:28.268926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.107 [2024-06-11 03:55:28.268935] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.107 [2024-06-11 03:55:28.268942] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.107 [2024-06-11 03:55:28.271692] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:47.107 [2024-06-11 03:55:28.281096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.107 [2024-06-11 03:55:28.281553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.107 [2024-06-11 03:55:28.281572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.107 [2024-06-11 03:55:28.281580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.107 [2024-06-11 03:55:28.281754] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.107 [2024-06-11 03:55:28.281930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.107 [2024-06-11 03:55:28.281939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.107 [2024-06-11 03:55:28.281946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.107 [2024-06-11 03:55:28.284698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.107 [2024-06-11 03:55:28.294111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.107 [2024-06-11 03:55:28.294543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.107 [2024-06-11 03:55:28.294563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.107 [2024-06-11 03:55:28.294571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.107 [2024-06-11 03:55:28.294744] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.107 [2024-06-11 03:55:28.294915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.107 [2024-06-11 03:55:28.294924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.107 [2024-06-11 03:55:28.294931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.107 [2024-06-11 03:55:28.297693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:47.107 [2024-06-11 03:55:28.307100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.107 [2024-06-11 03:55:28.307504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.107 [2024-06-11 03:55:28.307522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.107 [2024-06-11 03:55:28.307530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.107 [2024-06-11 03:55:28.307702] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.107 [2024-06-11 03:55:28.307875] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.107 [2024-06-11 03:55:28.307883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.107 [2024-06-11 03:55:28.307890] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.107 [2024-06-11 03:55:28.310640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.107 [2024-06-11 03:55:28.320205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.107 [2024-06-11 03:55:28.320573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.107 [2024-06-11 03:55:28.320589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.107 [2024-06-11 03:55:28.320597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.107 [2024-06-11 03:55:28.320770] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.107 [2024-06-11 03:55:28.320943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.107 [2024-06-11 03:55:28.320952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.107 [2024-06-11 03:55:28.320959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.108 [2024-06-11 03:55:28.323708] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@863 -- # return 0 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@729 -- # xtrace_disable 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:47.108 [2024-06-11 03:55:28.333280] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.333642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.333658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.333665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.333836] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.334014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.108 [2024-06-11 03:55:28.334023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.108 [2024-06-11 03:55:28.334030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.108 [2024-06-11 03:55:28.336774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.108 [2024-06-11 03:55:28.346339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.346706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.346722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.346729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.346900] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.347077] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.108 [2024-06-11 03:55:28.347086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.108 [2024-06-11 03:55:28.347092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.108 [2024-06-11 03:55:28.349833] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:47.108 [2024-06-11 03:55:28.359402] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.359788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.359804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.359810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.359983] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.360162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.108 [2024-06-11 03:55:28.360171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.108 [2024-06-11 03:55:28.360177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.108 [2024-06-11 03:55:28.362920] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:47.108 [2024-06-11 03:55:28.369411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:58:47.108 [2024-06-11 03:55:28.372483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.372801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.372817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.372824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.372995] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.373172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.108 [2024-06-11 03:55:28.373181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.108 [2024-06-11 03:55:28.373187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:47.108 [2024-06-11 03:55:28.375936] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.108 [2024-06-11 03:55:28.385498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.385954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.385970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.385977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.386153] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.386325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.108 [2024-06-11 03:55:28.386333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.108 [2024-06-11 03:55:28.386339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.108 [2024-06-11 03:55:28.389088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.108 [2024-06-11 03:55:28.398480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.398916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.398931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.398938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.399115] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.399287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.108 [2024-06-11 03:55:28.399295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.108 [2024-06-11 03:55:28.399302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.108 [2024-06-11 03:55:28.402051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:47.108 [2024-06-11 03:55:28.411462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.411863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.411881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.411888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.412069] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.412240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.108 [2024-06-11 03:55:28.412248] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.108 [2024-06-11 03:55:28.412255] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.108 Malloc0 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:47.108 [2024-06-11 03:55:28.414997] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:47.108 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:47.108 [2024-06-11 03:55:28.424562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.108 [2024-06-11 03:55:28.424927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.108 [2024-06-11 03:55:28.424943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.108 [2024-06-11 03:55:28.424950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.108 [2024-06-11 03:55:28.425124] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.108 [2024-06-11 03:55:28.425296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.109 [2024-06-11 03:55:28.425304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.109 [2024-06-11 03:55:28.425310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:47.109 [2024-06-11 03:55:28.428067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:47.109 [2024-06-11 03:55:28.437627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.109 [2024-06-11 03:55:28.437952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:58:47.109 [2024-06-11 03:55:28.437968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1318770 with addr=10.0.0.2, port=4420 00:58:47.109 [2024-06-11 03:55:28.437975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1318770 is same with the state(5) to be set 00:58:47.109 [2024-06-11 03:55:28.438152] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1318770 (9): Bad file descriptor 00:58:47.109 [2024-06-11 03:55:28.438211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:58:47.109 [2024-06-11 03:55:28.438324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:58:47.109 [2024-06-11 03:55:28.438333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:58:47.109 [2024-06-11 03:55:28.438339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:58:47.109 [2024-06-11 03:55:28.441084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:47.109 03:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2406180 00:58:47.109 [2024-06-11 03:55:28.450678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:58:47.368 [2024-06-11 03:55:28.641267] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
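Interleaved with the reset noise above, the xtrace lines show the target being stood up over RPC: create the TCP transport, create a malloc bdev, create subsystem cnode1, attach the bdev as a namespace, and finally add the 10.0.0.2:4420 listener, after which the pending reset succeeds ("Resetting controller successful"). Collected in one place, the sequence is roughly the following; this is a sketch assuming rpc_cmd forwards to SPDK's scripts/rpc.py against the running nvmf_tgt, with flags copied verbatim from the log:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u sets the in-capsule data size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420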
00:58:57.340
00:58:57.340 Latency(us)
00:58:57.340 Device Information          : runtime(s)     IOPS   MiB/s    Fail/s   TO/s  Average     min       max
00:58:57.340 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:58:57.340 Verification LBA range: start 0x0 length 0x4000
00:58:57.340 Nvme1n1                     :      15.01  8709.99   34.02  11454.52   0.00  6328.47  592.94  20347.37
00:58:57.340 ===================================================================================================================
00:58:57.340 Total                       :             8709.99   34.02  11454.52   0.00  6328.47  592.94  20347.37
00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2407122 ']' 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2407122 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@949 -- # '[' -z 2407122 ']' 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # kill -0 2407122 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # uname 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2407122 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2407122'
killing process with pid 2407122
00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@968 -- # kill 2407122 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@973 -- # wait 2407122 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:58:57.340 03:55:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:58.715 03:55:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:58:58.715 00:58:58.715 real 0m25.906s 00:58:58.715 user 1m0.261s 00:58:58.715 sys 0m6.687s 00:58:58.715 03:55:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:58:58.715 03:55:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:58:58.715 ************************************ 00:58:58.715 END TEST nvmf_bdevperf 00:58:58.715 ************************************ 00:58:58.715 03:55:40 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:58:58.715 03:55:40 nvmf_tcp -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:58:58.715 03:55:40 nvmf_tcp -- common/autotest_common.sh@1106 -- # xtrace_disable 00:58:58.715 03:55:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:58:58.715 ************************************ 00:58:58.715 START TEST nvmf_target_disconnect 00:58:58.715 ************************************ 00:58:58.715 03:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:58:58.974 * Looking for test storage... 
00:58:58.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:58:58.974 03:55:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:58:58.975 03:55:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:59:05.568 Found 0000:86:00.0 (0x8086 - 0x159b) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:59:05.568 Found 0000:86:00.1 (0x8086 - 0x159b) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:05.568 03:55:46 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:59:05.568 Found net devices under 0000:86:00.0: cvl_0_0 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:59:05.568 Found net devices under 0000:86:00.1: cvl_0_1 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:59:05.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:05.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:59:05.568 00:59:05.568 --- 10.0.0.2 ping statistics --- 00:59:05.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:05.568 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:59:05.568 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:59:05.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:59:05.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:59:05.568 00:59:05.569 --- 10.0.0.1 ping statistics --- 00:59:05.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:05.569 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:59:05.569 ************************************ 00:59:05.569 START TEST nvmf_target_disconnect_tc1 00:59:05.569 ************************************ 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc1 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:59:05.569 
03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:59:05.569 EAL: No free 2048 kB hugepages reported on node 1 00:59:05.569 [2024-06-11 03:55:46.667823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:05.569 [2024-06-11 03:55:46.667919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2340dc0 with addr=10.0.0.2, port=4420 00:59:05.569 [2024-06-11 03:55:46.667971] nvme_tcp.c:2706:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:59:05.569 [2024-06-11 03:55:46.668008] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:59:05.569 [2024-06-11 03:55:46.668047] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:59:05.569 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:59:05.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:59:05.569 Initializing NVMe Controllers 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:59:05.569 00:59:05.569 real 0m0.088s 00:59:05.569 user 0m0.033s 00:59:05.569 sys 
0m0.055s 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:59:05.569 ************************************ 00:59:05.569 END TEST nvmf_target_disconnect_tc1 00:59:05.569 ************************************ 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1106 -- # xtrace_disable 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:59:05.569 ************************************ 00:59:05.569 START TEST nvmf_target_disconnect_tc2 00:59:05.569 ************************************ 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # nvmf_target_disconnect_tc2 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2412547 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2412547 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2412547 ']' 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:05.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:59:05.569 03:55:46 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:05.569 [2024-06-11 03:55:46.802036] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:59:05.569 [2024-06-11 03:55:46.802073] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:05.569 EAL: No free 2048 kB hugepages reported on node 1 00:59:05.569 [2024-06-11 03:55:46.874460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:05.569 [2024-06-11 03:55:46.915609] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:05.569 [2024-06-11 03:55:46.915649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:59:05.569 [2024-06-11 03:55:46.915657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:05.569 [2024-06-11 03:55:46.915663] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:05.569 [2024-06-11 03:55:46.915668] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:05.569 [2024-06-11 03:55:46.915795] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:59:05.569 [2024-06-11 03:55:46.915886] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:59:05.569 [2024-06-11 03:55:46.915992] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:59:05.569 [2024-06-11 03:55:46.915993] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:06.503 Malloc0 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:06.503 [2024-06-11 03:55:47.662781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
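The target that just finished booting was launched inside the namespace by nvmfappstart; a sketch of the equivalent manual invocation (paths as printed in this workspace, waitforlisten being the autotest helper that polls the RPC socket):

  # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups,
  # -m 0xF0: pin reactors to cores 4-7, matching the reactor notices above.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs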
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:06.503 [2024-06-11 03:55:47.691762] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2412792 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:59:06.503 03:55:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:59:06.503 EAL: No free 2048 kB hugepages reported on node 1 00:59:08.410 03:55:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2412547 00:59:08.410 03:55:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:59:08.410 Read completed with error (sct=0, sc=8) 00:59:08.410 starting I/O failed 00:59:08.410 Read completed with error (sct=0, sc=8) 
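Unrolled from the rpc_cmd trace above, the provisioning sequence is the usual one: create a RAM-backed bdev, create the TCP transport, create a subsystem, attach the bdev as a namespace, and listen on 10.0.0.2:4420. A sketch issuing the same RPCs through scripts/rpc.py directly (rpc_cmd is the harness wrapper that forwards to it; the path is assumed relative to the SPDK checkout):

  rpc=./scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MB bdev, 512-byte blocks
  $rpc nvmf_create_transport -t tcp -o         # TCP transport plus the harness's '-o' option
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The reconnect example started right after it (-q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF) drives queue-depth-32, 4 KiB random reads and writes at a 50/50 mix for 10 seconds from cores 0-3; that is the workload whose I/O is about to be aborted.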
00:59:08.410 starting I/O failed 00:59:08.410 Read completed with error (sct=0, sc=8) 00:59:08.410 starting I/O failed 00:59:08.410 Read completed with error (sct=0, sc=8) 00:59:08.410 starting I/O failed 00:59:08.410 Read completed with error (sct=0, sc=8) 00:59:08.410 starting I/O failed 00:59:08.410 Read completed with error (sct=0, sc=8) 00:59:08.410 starting I/O failed 00:59:08.410 Read completed with error (sct=0, sc=8) 00:59:08.410 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 [2024-06-11 03:55:49.719653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 
starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 [2024-06-11 03:55:49.719868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O 
failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 [2024-06-11 03:55:49.720062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Write completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.411 Read completed with error (sct=0, sc=8) 00:59:08.411 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 
00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Write completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 Read completed with error (sct=0, sc=8) 00:59:08.412 starting I/O failed 00:59:08.412 [2024-06-11 03:55:49.720242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:08.412 [2024-06-11 03:55:49.720519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.720573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.720865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.720897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.721112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.721143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.721362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.721392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.721673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.721703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.721855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.721865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.722041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.722051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 
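The wall of aborted completions above is the disconnect itself: target_disconnect.sh kills the target with SIGKILL (kill -9 2412547) while the reconnect example is mid-workload. Every command still outstanding on the 32-deep queues is completed locally with sct=0, sc=8 (generic status 0x08, command aborted due to SQ deletion in NVMe terms), the 'CQ transport error -6' lines mark each of the four I/O qpairs going down, and every reconnect attempt that follows fails with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. A sketch of that step, assuming $nvmfpid still holds the target's pid from the launch above:

  # SIGKILL gives the target no chance to drain or fail I/O cleanly; the
  # initiator must abort in-flight commands itself and then finds the
  # listener gone on every reconnect attempt (connect() -> ECONNREFUSED).
  kill -9 "$nvmfpid"
  sleep 2   # tc2 pauses before probing the now-dead target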
00:59:08.412 [2024-06-11 03:55:49.722301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.722311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.722488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.722498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.722625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.722654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.722904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.722933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.723101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.723131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.723424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.723454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.723715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.723744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.724053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.724087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.724320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.724353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.724512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.724541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 
00:59:08.412 [2024-06-11 03:55:49.724856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.724885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.725161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.725171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.725349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.725358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.725562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.725593] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.725891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.725921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.726139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.726170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.726387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.726396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.726586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.726615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.412 qpair failed and we were unable to recover it. 00:59:08.412 [2024-06-11 03:55:49.726831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.412 [2024-06-11 03:55:49.726861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.727163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.727194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 
00:59:08.413 [2024-06-11 03:55:49.727442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.727487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.727781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.727820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.727941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.727951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.728062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.728072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.728295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.728305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.728502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.728511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.728768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.728777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.728957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.728987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.729218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.729249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.729546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.729575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 
00:59:08.413 [2024-06-11 03:55:49.729868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.729898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.730200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.730231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.730470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.730499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.730781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.730831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.731061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.731071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.731233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.731242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.731409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.731418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.731645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.731655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.731839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.731849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.732047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.732058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 
00:59:08.413 [2024-06-11 03:55:49.732302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.732312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.732479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.732488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.732701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.732711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.732879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.732888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.733158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.733168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.733363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.733392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.733564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.733592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.733801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.733832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.734053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.734084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.734309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.734338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 
00:59:08.413 [2024-06-11 03:55:49.734544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.734574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.734811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.734841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.735102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.413 [2024-06-11 03:55:49.735112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.413 qpair failed and we were unable to recover it. 00:59:08.413 [2024-06-11 03:55:49.735270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.414 [2024-06-11 03:55:49.735280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.414 qpair failed and we were unable to recover it. 00:59:08.414 [2024-06-11 03:55:49.735409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.414 [2024-06-11 03:55:49.735439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.414 qpair failed and we were unable to recover it. 00:59:08.414 [2024-06-11 03:55:49.735763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.414 [2024-06-11 03:55:49.735792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.414 qpair failed and we were unable to recover it. 00:59:08.414 [2024-06-11 03:55:49.736018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.414 [2024-06-11 03:55:49.736028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.414 qpair failed and we were unable to recover it. 00:59:08.414 [2024-06-11 03:55:49.736260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.414 [2024-06-11 03:55:49.736290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.414 qpair failed and we were unable to recover it. 00:59:08.414 [2024-06-11 03:55:49.736579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.414 [2024-06-11 03:55:49.736608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.414 qpair failed and we were unable to recover it. 00:59:08.414 [2024-06-11 03:55:49.736894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.414 [2024-06-11 03:55:49.736924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.414 qpair failed and we were unable to recover it. 
00:59:08.414 (the remainder of the capture repeats the same three lines for every further retry: posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:59:08.416 [2024-06-11 03:55:49.753054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.416 [2024-06-11 03:55:49.753064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.416 qpair failed and we were unable to recover it. 00:59:08.416 [2024-06-11 03:55:49.753269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.416 [2024-06-11 03:55:49.753299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.753575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.753604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.753878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.753908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.754175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.754207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.754517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.754547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.754821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.754831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.755095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.755105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.755264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.755273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.755527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.755556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 
00:59:08.417 [2024-06-11 03:55:49.755863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.755893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.756187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.756196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.756451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.756460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.756630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.756640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.756743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.756753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.757053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.757085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.757374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.757404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.757672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.757702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.757905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.757935] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.758211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.758243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 
00:59:08.417 [2024-06-11 03:55:49.758451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.758481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.758682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.758711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.758980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.759033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.759251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.759281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.759572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.759602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.759869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.759899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.760212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.760243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.760534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.760564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.760870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.760900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.761218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.761249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 
00:59:08.417 [2024-06-11 03:55:49.761525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.761555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.761871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.761901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.762183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.762215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.762439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.762474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.417 qpair failed and we were unable to recover it. 00:59:08.417 [2024-06-11 03:55:49.762729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.417 [2024-06-11 03:55:49.762758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.763027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.763058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.763300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.763330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.763619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.763649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.763926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.763955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.764261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.764291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 
00:59:08.418 [2024-06-11 03:55:49.764491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.764521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.764818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.764847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.765149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.765180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.765475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.765505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.765799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.765829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.766096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.766105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.766326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.766335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.766541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.766551] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.766772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.766781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.766948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.766957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 
00:59:08.418 [2024-06-11 03:55:49.767157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.767187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.767343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.767372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.767512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.767542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.767771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.767780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.768053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.768085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.768325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.768355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.768644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.768673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.768921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.768952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.769253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.769263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.769431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.769440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 
00:59:08.418 [2024-06-11 03:55:49.769686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.769696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.769863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.769872] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.770140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.770172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.770375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.770405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.770623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.770653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.770916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.770925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.771038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.771048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.771300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.771331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.418 [2024-06-11 03:55:49.771601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.418 [2024-06-11 03:55:49.771630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.418 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.771836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.771866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 
00:59:08.419 [2024-06-11 03:55:49.772078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.772089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.772261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.772291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.772516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.772545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.772824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.772853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.773175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.773207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.773479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.773509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.773820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.773829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.773999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.774015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.774137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.774146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.774375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.774385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 
00:59:08.419 [2024-06-11 03:55:49.774485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.774495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.774649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.774658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.774910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.774919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.775144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.775176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.775470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.775500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.775739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.775769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.776038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.776048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.776274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.776284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.776378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.776388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.776494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.776503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 
00:59:08.419 [2024-06-11 03:55:49.776740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.776750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.777026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.777057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.777346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.777375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.777663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.777693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.777991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.778031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.778317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.778327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.778573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.778583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.778831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.778840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.779031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.779041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.779229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.779238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 
00:59:08.419 [2024-06-11 03:55:49.779430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.419 [2024-06-11 03:55:49.779465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.419 qpair failed and we were unable to recover it. 00:59:08.419 [2024-06-11 03:55:49.779758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.779788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.780080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.780090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.780266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.780276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.780460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.780469] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.780698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.780728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.780965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.780994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.781295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.781325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.781622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.781652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.781900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.781931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 
00:59:08.420 [2024-06-11 03:55:49.782207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.782217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.782379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.782390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.782572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.782602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.782825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.782854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.783100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.783110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.783344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.783375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.783693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.783723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.784004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.784043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.784276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.784305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.784596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.784626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 
00:59:08.420 [2024-06-11 03:55:49.784837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.784868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.785159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.785169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.785382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.785411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.785701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.785731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.786029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.786060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.786357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.786387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.786677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.786707] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.787019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.787051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.787282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.787312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 00:59:08.420 [2024-06-11 03:55:49.787572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.787603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 
00:59:08.420 [2024-06-11 03:55:49.787875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.420 [2024-06-11 03:55:49.787885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.420 qpair failed and we were unable to recover it. 
[The preceding pair of *ERROR* lines and the "qpair failed and we were unable to recover it." message repeat continuously, with only the timestamps advancing (target clock 03:55:49.787 through 03:55:49.838, log clock 00:59:08.420 through 00:59:08.703), always for the same tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420; every reconnect attempt fails with connect() errno = 111.]
00:59:08.703 [2024-06-11 03:55:49.838968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.838984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.839137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.839148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.839336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.839366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.839502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.839531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.839868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.839898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.840152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.840183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.840352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.840382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.840548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.840578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.840896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.840905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.841019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.841029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 
00:59:08.703 [2024-06-11 03:55:49.841293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.841323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.841486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.841516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.841750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.841782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.842029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.842058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.842233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.842263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.842469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.842480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.842762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.842773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.842961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.842991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.843194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.843225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.843476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.843506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 
00:59:08.703 [2024-06-11 03:55:49.843736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.843766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.844017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.844028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.844266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.844276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.844456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.844466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.844674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.844704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.844926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.844955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.845232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.845243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.845497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.703 [2024-06-11 03:55:49.845539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:08.703 qpair failed and we were unable to recover it. 00:59:08.703 [2024-06-11 03:55:49.845876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.845947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.846195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.846230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 
00:59:08.704 [2024-06-11 03:55:49.846415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.846446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.846722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.846752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.847003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.847044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.847371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.847401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.847578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.847608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.847914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.847945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.848236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.848270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.848438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.848467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.848771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.848801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.849056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.849088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 
00:59:08.704 [2024-06-11 03:55:49.849316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.849365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.849659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.849690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.849991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.850032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.850268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.850283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.850532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.850562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.850800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.850830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.851128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.851144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.851340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.851354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.851624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.851653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.851886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.851917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 
00:59:08.704 [2024-06-11 03:55:49.852134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.852150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.852349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.852379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.852585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.852615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.852893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.852922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.853134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.853151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.853383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.704 [2024-06-11 03:55:49.853413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.704 qpair failed and we were unable to recover it. 00:59:08.704 [2024-06-11 03:55:49.853635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.853665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.853943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.853973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.854319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.854350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.854623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.854653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 
00:59:08.705 [2024-06-11 03:55:49.854947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.854977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.855218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.855250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.855470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.855485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.855675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.855690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.855876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.855906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.856184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.856199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.856392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.856407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.856680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.856711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.856921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.856951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.857294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.857311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 
00:59:08.705 [2024-06-11 03:55:49.857511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.857526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.857723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.857753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.857995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.858042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.858367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.858382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.858632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.858662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.858899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.858929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.859151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.859167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.859382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.859412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.859639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.859668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.859964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.859993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 
00:59:08.705 [2024-06-11 03:55:49.860275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.860311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.860600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.860631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.860848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.860878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.861165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.861196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.861357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.861372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.861570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.861599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.861806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.861835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.862057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.862087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.862247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.862262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.862506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.862521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 
00:59:08.705 [2024-06-11 03:55:49.862810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.862840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.863059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.863090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.863392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.863422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.863717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.705 [2024-06-11 03:55:49.863747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.705 qpair failed and we were unable to recover it. 00:59:08.705 [2024-06-11 03:55:49.863970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.864000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.864171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.864202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.864405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.864420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.864667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.864696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.864924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.864955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.865182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.865213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 
00:59:08.706 [2024-06-11 03:55:49.865535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.865565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.865859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.865901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.866183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.866199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.866469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.866485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.866613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.866629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.866902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.866932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.867169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.867201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.867479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.867494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.867844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.867859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.868073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.868104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 
00:59:08.706 [2024-06-11 03:55:49.868317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.868348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.868695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.868727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.868892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.868922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.869186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.869235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.869491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.869506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.869699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.869714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.870015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.870030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.870278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.870293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.870439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.870454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.870557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.870571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 
00:59:08.706 [2024-06-11 03:55:49.870833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.870869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.871081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.871113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.871282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.871312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.871533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.871566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.871807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.871837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.871994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.872033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.872258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.872296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.872422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.872437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.872650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.872679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.872931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.872960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 
00:59:08.706 [2024-06-11 03:55:49.873202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.873235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.873449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.873487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.706 qpair failed and we were unable to recover it. 00:59:08.706 [2024-06-11 03:55:49.873836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.706 [2024-06-11 03:55:49.873878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 00:59:08.707 [2024-06-11 03:55:49.874126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.707 [2024-06-11 03:55:49.874173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 00:59:08.707 [2024-06-11 03:55:49.874379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.707 [2024-06-11 03:55:49.874401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 00:59:08.707 [2024-06-11 03:55:49.874556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.707 [2024-06-11 03:55:49.874575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 00:59:08.707 [2024-06-11 03:55:49.874856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.707 [2024-06-11 03:55:49.874878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 00:59:08.707 [2024-06-11 03:55:49.875039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.707 [2024-06-11 03:55:49.875057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 00:59:08.707 [2024-06-11 03:55:49.875185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.707 [2024-06-11 03:55:49.875201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 00:59:08.707 [2024-06-11 03:55:49.875391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.707 [2024-06-11 03:55:49.875406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.707 qpair failed and we were unable to recover it. 
00:59:08.713 [2024-06-11 03:55:49.923517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.923532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.923744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.923759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.923942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.923956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.924135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.924167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.924340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.924370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.924526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.924556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.924713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.924744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.925043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.925080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.925286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.925301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.925417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.925449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 
00:59:08.713 [2024-06-11 03:55:49.925625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.925655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.925902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.925932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.926148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.926164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.926299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.926315] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.926559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.926575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.926699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.926714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.713 [2024-06-11 03:55:49.926919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.713 [2024-06-11 03:55:49.926949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.713 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.927289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.927320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.927514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.927545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.927837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.927867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 
00:59:08.714 [2024-06-11 03:55:49.928146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.928161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.928362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.928393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.928561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.928592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.928820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.928849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.929173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.929205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.929484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.929515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.929734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.929764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.930061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.930093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.930275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.930291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.930512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.930543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 
00:59:08.714 [2024-06-11 03:55:49.930876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.930906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.931191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.931222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.931475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.931505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.931843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.931873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.932147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.932179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.932476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.932513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.932829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.932844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.933151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.933167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.933307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.933322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.933526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.933541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 
00:59:08.714 [2024-06-11 03:55:49.933672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.933687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.933873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.933888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.934179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.934195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.934391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.934406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.934535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.934550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.934741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.934771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.934985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.935024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.935302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.935338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.935561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.714 [2024-06-11 03:55:49.935591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.714 qpair failed and we were unable to recover it. 00:59:08.714 [2024-06-11 03:55:49.935869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.935899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 
00:59:08.715 [2024-06-11 03:55:49.936122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.936153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.936325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.936340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.936611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.936641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.936848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.936879] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.937122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.937153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.937363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.937393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.937662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.937678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.937859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.937874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.938084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.938100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.938364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.938379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 
00:59:08.715 [2024-06-11 03:55:49.938579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.938609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.938832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.938863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.939166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.939198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.939424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.939454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.939726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.939756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.939964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.939995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.940314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.940329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.940633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.940664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.940825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.940855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.941064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.941095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 
00:59:08.715 [2024-06-11 03:55:49.941328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.941359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.941586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.941616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.941848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.941877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.942092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.942124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.942356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.942371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.942550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.942580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.942897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.942927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.943132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.943164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.943425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.943455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.943777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.943810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 
00:59:08.715 [2024-06-11 03:55:49.943968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.943998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.944310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.944340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.944567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.944582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.944841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.944871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.945142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.945174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.945422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.945438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.945684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.945699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.715 [2024-06-11 03:55:49.945913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.715 [2024-06-11 03:55:49.945931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.715 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.946217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.946233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.946433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.946448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 
00:59:08.716 [2024-06-11 03:55:49.946705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.946721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.946994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.947014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.947258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.947274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.947466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.947496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.947736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.947766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.947996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.948035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.948315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.948345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.948585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.948616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.948909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.948939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.949255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.949271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 
00:59:08.716 [2024-06-11 03:55:49.949543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.949558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.949713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.949728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.950051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.950067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.950267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.950283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.950478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.950493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.950755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.950769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.951019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.951036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.951326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.951342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.951586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.951601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.951745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.951775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 
00:59:08.716 [2024-06-11 03:55:49.952056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.952087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.952363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.952393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.952643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.952674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.952893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.952922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.953209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.953241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.953416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.953432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.953630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.953645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.953830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.953865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.954162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.954194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.954498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.954528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 
00:59:08.716 [2024-06-11 03:55:49.954826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.954856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.955133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.955164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.955382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.955414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.955707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.955722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.955991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.956006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.956207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.716 [2024-06-11 03:55:49.956223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.716 qpair failed and we were unable to recover it. 00:59:08.716 [2024-06-11 03:55:49.956421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.956451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.956658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.956694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.956962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.956992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.957298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.957314] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 
00:59:08.717 [2024-06-11 03:55:49.957471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.957487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.957768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.957798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.958049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.958081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.958276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.958291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.958438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.958453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.958760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.958775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.959005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.959046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.959295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.959326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.959554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.959584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 00:59:08.717 [2024-06-11 03:55:49.959810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.717 [2024-06-11 03:55:49.959840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.717 qpair failed and we were unable to recover it. 
00:59:08.722 [2024-06-11 03:55:50.009236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.722 [2024-06-11 03:55:50.009251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.722 qpair failed and we were unable to recover it. 00:59:08.722 [2024-06-11 03:55:50.009373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.722 [2024-06-11 03:55:50.009388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.722 qpair failed and we were unable to recover it. 00:59:08.722 [2024-06-11 03:55:50.009516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.722 [2024-06-11 03:55:50.009531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.722 qpair failed and we were unable to recover it. 00:59:08.722 [2024-06-11 03:55:50.009816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.722 [2024-06-11 03:55:50.009832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.722 qpair failed and we were unable to recover it. 00:59:08.722 [2024-06-11 03:55:50.010020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.722 [2024-06-11 03:55:50.010036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.722 qpair failed and we were unable to recover it. 00:59:08.722 [2024-06-11 03:55:50.010170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.722 [2024-06-11 03:55:50.010185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.722 qpair failed and we were unable to recover it. 00:59:08.722 [2024-06-11 03:55:50.010324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.722 [2024-06-11 03:55:50.010339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.722 qpair failed and we were unable to recover it. 00:59:08.722 [2024-06-11 03:55:50.010571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.010587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.010841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.010856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.011033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.011049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 
00:59:08.723 [2024-06-11 03:55:50.011189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.011203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.011336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.011352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.011467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.011482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.011710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.011726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.011922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.011937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.012143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.012159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.012307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.012323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.012463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.012478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.012716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.012731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.012922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.012938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 
00:59:08.723 [2024-06-11 03:55:50.013068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.013083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.013286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.013300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.013434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.013450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.013645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.013660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.013834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.013849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.014029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.014044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.014234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.014250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.014429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.014447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.014572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.014586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.014829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.014845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 
00:59:08.723 [2024-06-11 03:55:50.015105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.015121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.015353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.015368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.015620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.015635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.015773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.015787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.015976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.015992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.016136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.016151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.016265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.016280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.016409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.016423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.723 qpair failed and we were unable to recover it. 00:59:08.723 [2024-06-11 03:55:50.016707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.723 [2024-06-11 03:55:50.016722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.016948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.016963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 
00:59:08.724 [2024-06-11 03:55:50.017253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.017269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.017474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.017489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.017772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.017787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.018034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.018050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.018251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.018266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.018447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.018461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.018656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.018670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.018894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.018910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.019126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.019156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.019308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.019323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 
00:59:08.724 [2024-06-11 03:55:50.019545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.019561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.019747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.019761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.019956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.019971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.020216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.020232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.020387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.020402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.020648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.020663] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.020923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.020938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.021161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.021176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.021453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.021468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.021593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.021607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 
00:59:08.724 [2024-06-11 03:55:50.021834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.021849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.022045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.022060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.022194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.022209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.022452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.022467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.022584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.022598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.022791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.022806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.023078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.023095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.023230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.023245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.023397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.023412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.023622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.023637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 
00:59:08.724 [2024-06-11 03:55:50.023876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.023890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.024086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.024101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.024243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.024260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.024471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.024488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.024840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.024855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.025079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.025095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.724 [2024-06-11 03:55:50.025292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.724 [2024-06-11 03:55:50.025307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.724 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.025495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.025511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.025641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.025655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.025888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.025907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 
00:59:08.725 [2024-06-11 03:55:50.026148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.026181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.026374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.026393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.026604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.026639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.026832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.026850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.027048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.027110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.027338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.027432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.027757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.027796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.027972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.028067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.028344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.028392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 00:59:08.725 [2024-06-11 03:55:50.028626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.028664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it. 
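errno = 111 here is Linux's ECONNREFUSED: each connect() attempt reached 10.0.0.2:4420 but nothing was accepting on that port, so the NVMe/TCP qpair could never be established. A minimal standalone C sketch, not SPDK code, reproduces the same errno against a closed port (the loopback address below is illustrative; 4420 is the NVMe/TCP port from the log):

    /* Sketch only: provoke ECONNREFUSED (errno 111 on Linux) by
     * connecting to a port with no listener. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP port, as in the log */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* loopback stands in for 10.0.0.2 */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener bound, this prints errno = 111 (Connection refused),
             * matching the "connect() failed, errno = 111" lines above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }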
00:59:08.725 [2024-06-11 03:55:50.028927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.725 [2024-06-11 03:55:50.028960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:08.725 qpair failed and we were unable to recover it.
[... two more identical failures for tqpair=0x7f01a8000b90 at 03:55:50.029118 and 03:55:50.029345 ...]
00:59:08.725 [2024-06-11 03:55:50.029567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb70e30 is same with the state(5) to be set
[... the connect()/qpair-failure triplet then repeats with errno = 111 against addr=10.0.0.2, port=4420 for tqpair=0x7f01b8000b90 (roughly 15 occurrences, 03:55:50.029814 through 03:55:50.043004), with single interleaved failures for tqpair=0xb62e70 at 03:55:50.033540 and tqpair=0x7f01a8000b90 at 03:55:50.033874 ...]
[... and finally for tqpair=0x7f01b0000b90 (roughly 54 occurrences, 03:55:50.034131 through 03:55:50.046785), interleaved with five more failures for tqpair=0xb62e70 between 03:55:50.041819 and 03:55:50.042705 ...]
00:59:08.727 [2024-06-11 03:55:50.046969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.046980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.047213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.047225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.047455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.047465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.047575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.047584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.047703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.047713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.047887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.047898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.048173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.048185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.048314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.048325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.048549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.048560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.048753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.048764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 
00:59:08.727 [2024-06-11 03:55:50.048877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.048888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.049082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.049093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.049222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.049233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.049394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.049404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.049577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.049588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.049822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.049833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.050069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.050079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.050205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.050215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.050404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.050418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.050603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.050615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 
00:59:08.727 [2024-06-11 03:55:50.050710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.050720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.727 qpair failed and we were unable to recover it. 00:59:08.727 [2024-06-11 03:55:50.050978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.727 [2024-06-11 03:55:50.050989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.051200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.051210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.051370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.051381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.051543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.051554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.051729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.051740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.051977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.051989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.052191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.052203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.052463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.052473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.052700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.052710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 
00:59:08.728 [2024-06-11 03:55:50.052833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.052843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.053073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.053084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.053213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.053236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.053423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.053433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.053552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.053561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.053835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.053845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.054022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.054033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.054207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.054217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.054314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.054324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.054517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.054527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 
00:59:08.728 [2024-06-11 03:55:50.054709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.054719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.054983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.054993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.055187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.055198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.055377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.055387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.055664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.055673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.055850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.728 [2024-06-11 03:55:50.055860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.728 qpair failed and we were unable to recover it. 00:59:08.728 [2024-06-11 03:55:50.055979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.055989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.056239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.056249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.056378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.056388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.056588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.056598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 
00:59:08.729 [2024-06-11 03:55:50.056714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.056724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.056896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.056905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.057090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.057101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.057375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.057384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.057562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.057572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.057796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.057805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.058025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.058035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.058267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.058277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.058522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.058534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.058805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.058815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 
00:59:08.729 [2024-06-11 03:55:50.058922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.058932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.059184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.059194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.059440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.059450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.059724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.059734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.059905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.059915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.060091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.060102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.060287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.060297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.060462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.060474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.060703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.060713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.060901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.060911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 
00:59:08.729 [2024-06-11 03:55:50.061115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.061125] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.061251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.061261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.061499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.061509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.061632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.061643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.061870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.061881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.062055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.062066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.062289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.062299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.062472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.062482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.062610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.062621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.062847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.062858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 
00:59:08.729 [2024-06-11 03:55:50.063085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.063098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.729 [2024-06-11 03:55:50.063214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.729 [2024-06-11 03:55:50.063224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.729 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.063404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.063415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.063585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.063594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.063767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.063777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.064057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.064067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.064239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.064249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.064346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.064354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.064583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.064593] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.064831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.064841] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 
00:59:08.730 [2024-06-11 03:55:50.065068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.065079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.065304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.065314] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.065490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.065501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.065678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.065688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.065914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.065924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.066095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.066105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.066305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.066315] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.066540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.066550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.066729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.066741] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.066932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.066944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 
00:59:08.730 [2024-06-11 03:55:50.067147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.067161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.067355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.067365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.067476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.067486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.067752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.067762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.067937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.067947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.068108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.068119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.068342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.068351] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.068466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.068475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.068600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.068610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.068717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.068729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 
00:59:08.730 [2024-06-11 03:55:50.068897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.068907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.069101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.069111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.069234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.069244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.069453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.069464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.069738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.069748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.069916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.069926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.070111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.070121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.070293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.070302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.070476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.070485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 00:59:08.730 [2024-06-11 03:55:50.070769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.730 [2024-06-11 03:55:50.070779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.730 qpair failed and we were unable to recover it. 
00:59:08.730 [2024-06-11 03:55:50.070889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.070899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.071006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.071021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.071243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.071254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.071428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.071438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.071594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.071604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.071719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.071729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.072001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.072014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.072291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.072301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.072492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.072502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 00:59:08.731 [2024-06-11 03:55:50.072618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.072628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it. 
00:59:08.731 [2024-06-11 03:55:50.072880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:08.731 [2024-06-11 03:55:50.072889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:08.731 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats continuously with only the timestamps advancing, from 03:55:50.073156 through 03:55:50.116112 ...]
00:59:09.015 [2024-06-11 03:55:50.116205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.116215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it.
00:59:09.015 [2024-06-11 03:55:50.116395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.116427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.116593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.116603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.116802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.116812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.116986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.116996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.117234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.117265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.117425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.117454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.117625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.117655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.117940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.117950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.118107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.118118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.118307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.118316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 
00:59:09.015 [2024-06-11 03:55:50.118476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.118512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.118722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.118752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.119058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.119090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.119309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.119338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.119555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.119585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.119792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.119822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.120092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.120102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.120273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.120283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.120519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.120549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 00:59:09.015 [2024-06-11 03:55:50.120787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.015 [2024-06-11 03:55:50.120817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.015 qpair failed and we were unable to recover it. 
00:59:09.015 [2024-06-11 03:55:50.121125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.121156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.121407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.121437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.121690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.121699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.121922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.121932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.122062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.122071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.122254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.122264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.122449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.122459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.122632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.122641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.122965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.122994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.123316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.123348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 
00:59:09.016 [2024-06-11 03:55:50.123619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.123629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.123829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.123839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.124127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.124137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.124393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.124423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.124592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.124622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.124914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.124947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.125163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.125174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.125372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.125381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.125586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.125597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.125722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.125731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 
00:59:09.016 [2024-06-11 03:55:50.126003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.126042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.126200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.126230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.126396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.126426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.126643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.126652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.126900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.126930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.127226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.127257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.127488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.127517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.127683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.127692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.127926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.127956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.016 [2024-06-11 03:55:50.128193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.128224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 
00:59:09.016 [2024-06-11 03:55:50.128453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.016 [2024-06-11 03:55:50.128483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.016 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.128786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.128816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.129107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.129138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.129412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.129441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.129758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.129788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.130018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.130048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.130224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.130253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.130546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.130576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.130874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.130884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.131126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.131136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 
00:59:09.017 [2024-06-11 03:55:50.131267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.131276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.131523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.131553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.131886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.131916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.132199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.132209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.132339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.132348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.132532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.132563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.132822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.132853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.133133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.133165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.133467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.133497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.133722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.133752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 
00:59:09.017 [2024-06-11 03:55:50.133925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.133955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.134223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.134254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.134427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.134457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.134748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.134778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.135058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.135089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.135305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.135334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.135481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.135510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.135829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.135859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.136131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.136143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.136320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.136331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 
00:59:09.017 [2024-06-11 03:55:50.136521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.136550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.136730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.136760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.137062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.137093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.137269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.137299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.137564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.137594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.137825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.137855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.138139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.138170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.138396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.138426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.017 [2024-06-11 03:55:50.138639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.017 [2024-06-11 03:55:50.138669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.017 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.138940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.138970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 
00:59:09.018 [2024-06-11 03:55:50.139137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.139147] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.139260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.139270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.139477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.139486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.139736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.139746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.139948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.139958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.140101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.140111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.140339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.140349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.140533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.140563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.140795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.140826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.141097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.141127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 
00:59:09.018 [2024-06-11 03:55:50.141351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.141381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.141551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.141580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.141870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.141900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.142171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.142181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.142386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.142396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.142593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.142624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.142917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.142947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.143111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.143141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.143353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.143383] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.143583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.143592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 
00:59:09.018 [2024-06-11 03:55:50.143776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.143806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.144092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.144122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.144359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.144388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.144618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.144648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.144917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.144947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.145202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.145212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.145461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.145471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.145584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.145594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.145840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.145851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.146023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.146033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 
00:59:09.018 [2024-06-11 03:55:50.146233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.146263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.146499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.146530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.146768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.146798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.147087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.147097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.147277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.147287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.147418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.147428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.147620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.018 [2024-06-11 03:55:50.147630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.018 qpair failed and we were unable to recover it. 00:59:09.018 [2024-06-11 03:55:50.147807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.019 [2024-06-11 03:55:50.147816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.019 qpair failed and we were unable to recover it. 00:59:09.019 [2024-06-11 03:55:50.147992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.019 [2024-06-11 03:55:50.148045] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.019 qpair failed and we were unable to recover it. 00:59:09.019 [2024-06-11 03:55:50.148269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.019 [2024-06-11 03:55:50.148298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.019 qpair failed and we were unable to recover it. 
00:59:09.019 [2024-06-11 03:55:50.148565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.019 [2024-06-11 03:55:50.148595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.019 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats 39 more times between 03:55:50.148857 and 03:55:50.157716, varying only in timestamp ...]
00:59:09.020 [2024-06-11 03:55:50.158014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.020 [2024-06-11 03:55:50.158023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.020 qpair failed and we were unable to recover it. 00:59:09.020 [2024-06-11 03:55:50.158199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.020 [2024-06-11 03:55:50.158237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.020 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats 167 more times for tqpair=0xb62e70 between 03:55:50.158490 and 03:55:50.200184, varying only in timestamp; the log prefix advances from 00:59:09.020 to 00:59:09.024 over the run ...]
00:59:09.024 [2024-06-11 03:55:50.200459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.200489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it.
00:59:09.024 [2024-06-11 03:55:50.200800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.200830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.201052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.201083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.201257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.201286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.201515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.201545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.201867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.201907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.202090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.202105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.202252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.202282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.202514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.202544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.202723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.202753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.203020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.203036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 
00:59:09.024 [2024-06-11 03:55:50.203171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.203186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.203304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.203318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.024 [2024-06-11 03:55:50.203625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.024 [2024-06-11 03:55:50.203655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.024 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.203810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.203840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.204135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.204151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.204304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.204319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.204561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.204591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.204814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.204844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.205089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.205119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.205323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.205353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 
00:59:09.025 [2024-06-11 03:55:50.205511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.205541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.205765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.205780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.206019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.206037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.206246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.206261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.206451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.206466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.206604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.206619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.206907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.206937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.207103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.207134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.207431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.207461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.207757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.207787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 
00:59:09.025 [2024-06-11 03:55:50.207973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.208003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.208232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.208247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.208441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.208471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.208676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.208705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.208920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.208949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.209205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.209236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.209455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.209485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.209689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.209718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.209954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.209984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.210236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.210269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 
00:59:09.025 [2024-06-11 03:55:50.210513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.210532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.210801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.210820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.211085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.211105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.211261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.211279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.211484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.025 [2024-06-11 03:55:50.211502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.025 qpair failed and we were unable to recover it. 00:59:09.025 [2024-06-11 03:55:50.211667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.211705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.212045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.212084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.212355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.212374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.212659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.212696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.212956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.213001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 
00:59:09.026 [2024-06-11 03:55:50.213240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.213258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.213403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.213421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.213590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.213628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.213833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.213869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.214161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.214202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.214390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.214404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.214599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.214613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.214856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.214871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.215073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.215088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.215221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.215235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 
00:59:09.026 [2024-06-11 03:55:50.215369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.215384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.215517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.215532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.215795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.215825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.216058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.216090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.216325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.216355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.216663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.216692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.216978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.216992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.217260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.217275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.217398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.217413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.217677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.217706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 
00:59:09.026 [2024-06-11 03:55:50.217924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.217953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.218193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.218209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.218411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.218441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.218691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.218721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.218927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.218957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.219262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.219293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.219467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.219496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.219737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.219766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.220085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.220102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.220273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.220288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 
00:59:09.026 [2024-06-11 03:55:50.220472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.220486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.220616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.220645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.220938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.220967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.221196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.026 [2024-06-11 03:55:50.221227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.026 qpair failed and we were unable to recover it. 00:59:09.026 [2024-06-11 03:55:50.221391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.221422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.221632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.221662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.221882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.221898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.222073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.222089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.222211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.222225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.222418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.222433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 
00:59:09.027 [2024-06-11 03:55:50.222558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.222573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.222776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.222791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.223059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.223074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.223197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.223213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.223407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.223422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.223695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.223710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.223975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.223990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.224264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.224279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.224451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.224466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.224747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.224762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 
00:59:09.027 [2024-06-11 03:55:50.225002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.225023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.225212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.225226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.225429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.225443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.225614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.225629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.225814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.225830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.226022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.226038] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.226159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.226174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.226352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.226366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.226505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.226520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.226804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.226819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 
00:59:09.027 [2024-06-11 03:55:50.226966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.226980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.227210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.227225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.227465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.227480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.227726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.227741] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.227878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.227893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.228124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.228140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.228289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.228303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.228506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.228523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.228805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.228819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.229002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.229023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 
00:59:09.027 [2024-06-11 03:55:50.229211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.229225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.229362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.229377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.027 [2024-06-11 03:55:50.229662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.027 [2024-06-11 03:55:50.229676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.027 qpair failed and we were unable to recover it. 00:59:09.028 [2024-06-11 03:55:50.229964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.028 [2024-06-11 03:55:50.229980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.028 qpair failed and we were unable to recover it. 00:59:09.028 [2024-06-11 03:55:50.230267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.028 [2024-06-11 03:55:50.230282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.028 qpair failed and we were unable to recover it. 00:59:09.028 [2024-06-11 03:55:50.230493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.028 [2024-06-11 03:55:50.230508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.028 qpair failed and we were unable to recover it. 00:59:09.028 [2024-06-11 03:55:50.230638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.028 [2024-06-11 03:55:50.230652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.028 qpair failed and we were unable to recover it. 00:59:09.028 [2024-06-11 03:55:50.230827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.028 [2024-06-11 03:55:50.230842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.028 qpair failed and we were unable to recover it. 00:59:09.028 [2024-06-11 03:55:50.231047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.028 [2024-06-11 03:55:50.231062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.028 qpair failed and we were unable to recover it. 00:59:09.028 [2024-06-11 03:55:50.231200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.028 [2024-06-11 03:55:50.231214] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.028 qpair failed and we were unable to recover it. 
00:59:09.028 [2024-06-11 03:55:50.231418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.028 [2024-06-11 03:55:50.231432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:09.028 qpair failed and we were unable to recover it.
00:59:09.028 [2024-06-11 03:55:50.237236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.028 [2024-06-11 03:55:50.237264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.028 qpair failed and we were unable to recover it.
00:59:09.033 [2024-06-11 03:55:50.275495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.275505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.033 qpair failed and we were unable to recover it. 00:59:09.033 [2024-06-11 03:55:50.275633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.275643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.033 qpair failed and we were unable to recover it. 00:59:09.033 [2024-06-11 03:55:50.275747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.275756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.033 qpair failed and we were unable to recover it. 00:59:09.033 [2024-06-11 03:55:50.275953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.275963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.033 qpair failed and we were unable to recover it. 00:59:09.033 [2024-06-11 03:55:50.276135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.276145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.033 qpair failed and we were unable to recover it. 00:59:09.033 [2024-06-11 03:55:50.276320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.276330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.033 qpair failed and we were unable to recover it. 00:59:09.033 [2024-06-11 03:55:50.276523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.276532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.033 qpair failed and we were unable to recover it. 00:59:09.033 [2024-06-11 03:55:50.276653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.033 [2024-06-11 03:55:50.276662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.276885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.276894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.277000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.277012] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 
00:59:09.034 [2024-06-11 03:55:50.277176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.277185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.277300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.277309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.277478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.277487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.277615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.277624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.277797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.277807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.278054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.278064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.278291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.278301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.278503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.278512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.278802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.278814] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.279072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.279082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 
00:59:09.034 [2024-06-11 03:55:50.279269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.279279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.279474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.279483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.279591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.279601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.279850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.279860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.280132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.280141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.280411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.280421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.280671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.280681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.280958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.280968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.281148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.281158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.281318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.281327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 
00:59:09.034 [2024-06-11 03:55:50.281495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.281504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.281799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.281828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.282054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.282084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.282231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.282241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.282448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.282458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.282625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.282636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.282861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.282870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.283060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.283069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.283313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.283323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 00:59:09.034 [2024-06-11 03:55:50.283481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.034 [2024-06-11 03:55:50.283491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.034 qpair failed and we were unable to recover it. 
00:59:09.034 [2024-06-11 03:55:50.283737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.283747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.283949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.283959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.284187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.284197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.284375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.284384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.284562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.284571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.284747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.284756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.284928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.284937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.285136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.285146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.285321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.285331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.285505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.285514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 
00:59:09.035 [2024-06-11 03:55:50.285609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.285619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.285822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.285851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.286051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.286081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.286282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.286312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.286560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.286571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.286830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.286839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.287097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.287107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.287209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.287218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.287343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.287356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.287554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.287563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 
00:59:09.035 [2024-06-11 03:55:50.287811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.287821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.288002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.288014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.288288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.288317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.288551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.288580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.288731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.288761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.289049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.289058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.289263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.289292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.289505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.289535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.289808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.289837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.290171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.290202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 
00:59:09.035 [2024-06-11 03:55:50.290375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.290385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.290658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.290687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.290894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.290923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.291139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.291170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.291440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.291470] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.291809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.291838] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.292132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.292162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.292455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.292465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.035 qpair failed and we were unable to recover it. 00:59:09.035 [2024-06-11 03:55:50.292699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.035 [2024-06-11 03:55:50.292709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.292930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.292940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 
00:59:09.036 [2024-06-11 03:55:50.293190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.293200] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.293403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.293413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.293659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.293668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.293838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.293847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.294033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.294064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.294240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.294271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.294496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.294525] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.294790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.294820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.295113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.295143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.295437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.295446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 
00:59:09.036 [2024-06-11 03:55:50.295626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.295636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.295879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.295888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.296163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.296194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.296493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.296523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.296759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.296788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.297059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.297089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.297370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.297380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.297573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.297583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.297848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.297883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.298115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.298145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 
00:59:09.036 [2024-06-11 03:55:50.298383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.298393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.298574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.298603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.298827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.298856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.299065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.299102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.299297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.299307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.299476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.299485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.299672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.299702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.299911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.299940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.300188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.300218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.300461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.300471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 
00:59:09.036 [2024-06-11 03:55:50.300740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.300749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.301025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.301055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.301259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.301289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.301526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.301535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.301780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.301790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.302048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.302078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.036 [2024-06-11 03:55:50.302303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.036 [2024-06-11 03:55:50.302333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.036 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.302539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.302568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.302854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.302883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.303104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.303135] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 
00:59:09.037 [2024-06-11 03:55:50.303409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.303419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.303646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.303655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.303825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.303834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.304026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.304056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.304324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.304354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.304580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.304610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.304806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.304835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.305070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.305100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.305396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.305425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 00:59:09.037 [2024-06-11 03:55:50.305723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.305753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it. 
00:59:09.037 [2024-06-11 03:55:50.306039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.037 [2024-06-11 03:55:50.306070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.037 qpair failed and we were unable to recover it.
[last two messages repeated for tqpair=0x7f01b0000b90 from 03:55:50.306 through 03:55:50.355]
00:59:09.041 [2024-06-11 03:55:50.348253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.041 [2024-06-11 03:55:50.348325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.041 qpair failed and we were unable to recover it.
[last two messages repeated for tqpair=0x7f01b8000b90 from 03:55:50.348 through 03:55:50.358]
00:59:09.042 [2024-06-11 03:55:50.355561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.042 [2024-06-11 03:55:50.355632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.042 qpair failed and we were unable to recover it.
00:59:09.042 [2024-06-11 03:55:50.359119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.042 [2024-06-11 03:55:50.359154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.042 qpair failed and we were unable to recover it. 00:59:09.042 [2024-06-11 03:55:50.359386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.042 [2024-06-11 03:55:50.359416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.042 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.359557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.359569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.359695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.359707] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.359813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.359851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.360086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.360119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.360336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.360367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.360572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.360584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.360689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.360700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.360858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.360869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 
00:59:09.043 [2024-06-11 03:55:50.361149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.361182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.361393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.361425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.361607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.361638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.361917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.361948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.362164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.362176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.362345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.362357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.362575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.362605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.362766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.362797] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.363029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.363060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.363278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.363309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 
00:59:09.043 [2024-06-11 03:55:50.363494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.363506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.363679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.363691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.363962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.363974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.364083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.364093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.364290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.364321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.364454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.364490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.364764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.364795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.365060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.365093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.365363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.365395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.365659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.365671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 
00:59:09.043 [2024-06-11 03:55:50.365811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.365822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.365942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.365953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.366060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.366071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.366304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.366335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.043 qpair failed and we were unable to recover it. 00:59:09.043 [2024-06-11 03:55:50.366487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.043 [2024-06-11 03:55:50.366518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.366734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.366765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.366978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.367008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.367173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.367185] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.367253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.367275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.367389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.367399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 
00:59:09.044 [2024-06-11 03:55:50.367555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.367585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.367877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.367907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.368175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.368208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.368423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.368453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.368713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.368747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.368966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.368996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.369161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.369193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.369415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.369445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.369713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.369744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.370055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.370087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 
00:59:09.044 [2024-06-11 03:55:50.370305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.370336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.370572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.370602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.370758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.370790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.371091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.371123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.371278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.371309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.371429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.371447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.371617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.371648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.371918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.371949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.372050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.372081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.372378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.372389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 
00:59:09.044 [2024-06-11 03:55:50.372614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.372626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.372820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.372832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.373071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.373101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.373317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.373347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.373501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.373532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.373749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.373785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.373929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.373959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.374194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.374230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.374400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.374412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.374590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.374621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 
00:59:09.044 [2024-06-11 03:55:50.374896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.374926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.375142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.375175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.044 [2024-06-11 03:55:50.375445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.044 [2024-06-11 03:55:50.375475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.044 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.375740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.375751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.375869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.375881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.376056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.376089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.376229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.376260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.376480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.376510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.376657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.376669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.376847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.376878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 
00:59:09.045 [2024-06-11 03:55:50.377099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.377131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.377352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.377363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.377562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.377592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.377882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.377913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.378134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.378165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.378382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.378413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.378705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.378736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.379031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.379063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.379228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.379259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.379460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.379490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 
00:59:09.045 [2024-06-11 03:55:50.379781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.379811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.380024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.380056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.380181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.380211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.380474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.380486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.380597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.380629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.380916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.380946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.381213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.381244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.381394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.381406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.381581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.381612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.381765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.381796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 
00:59:09.045 [2024-06-11 03:55:50.382034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.382066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.382209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.382221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.382377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.382388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.382613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.382624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.382792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.382804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.383023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.383050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.383305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.383337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.383552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.383582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.383851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.383882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.045 [2024-06-11 03:55:50.384093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.384124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 
00:59:09.045 [2024-06-11 03:55:50.384415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.045 [2024-06-11 03:55:50.384446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.045 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.384729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.384741] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.384982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.384993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.385236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.385247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.385432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.385443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.385695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.385725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.386029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.386061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.386346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.386358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.386563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.386574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.386775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.386786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 
00:59:09.046 [2024-06-11 03:55:50.387066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.387098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.387332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.387363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.387649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.387679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.387945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.387975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.388295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.388328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.388596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.388607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.388771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.388783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.388980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.389019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.389256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.389287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 00:59:09.046 [2024-06-11 03:55:50.389524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.046 [2024-06-11 03:55:50.389535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.046 qpair failed and we were unable to recover it. 
00:59:09.046 [2024-06-11 03:55:50.389773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.046 [2024-06-11 03:55:50.389784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.046 qpair failed and we were unable to recover it.
[the same three-line failure pattern — connect() failed, errno = 111; sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 03:55:50.389 through 03:55:50.432]
00:59:09.329 [2024-06-11 03:55:50.432033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.329 [2024-06-11 03:55:50.432104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:09.329 qpair failed and we were unable to recover it.
[from this point the identical failure triplet continues through 03:55:50.444, alternating between tqpair=0xb62e70 and tqpair=0x7f01b0000b90, always with addr=10.0.0.2, port=4420]
00:59:09.330 [2024-06-11 03:55:50.444436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.330 [2024-06-11 03:55:50.444467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:09.330 qpair failed and we were unable to recover it.
00:59:09.330 [2024-06-11 03:55:50.444773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.444790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.445058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.445091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.445400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.445432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.445725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.445756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.445966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.445998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.446327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.446359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.446586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.446624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.446805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.446822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.446974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.330 [2024-06-11 03:55:50.446990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.330 qpair failed and we were unable to recover it. 00:59:09.330 [2024-06-11 03:55:50.447206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.447242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 
00:59:09.331 [2024-06-11 03:55:50.447471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.447503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.447701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.447733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.447905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.447936] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.448149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.448182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.448394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.448425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.448671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.448702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.448877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.448908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.449236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.449269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.449550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.449581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.449893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.449924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 
00:59:09.331 [2024-06-11 03:55:50.450156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.450188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.450456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.450468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.450617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.450629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.450823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.450836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.451031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.451043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.451253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.451285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.451516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.451547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.451771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.451784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.452044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.452079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.452286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.452317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 
00:59:09.331 [2024-06-11 03:55:50.452540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.452572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.452722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.452754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.453000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.453055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.453220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.453252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.453555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.453567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.453742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.453755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.453921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.453933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.454136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.454149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.454402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.454414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.454700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.454713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 
00:59:09.331 [2024-06-11 03:55:50.454913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.454925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.455087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.455099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.455371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.455403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.455562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.455594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.455823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.455854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.456077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.456109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.456409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.456446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.331 qpair failed and we were unable to recover it. 00:59:09.331 [2024-06-11 03:55:50.456753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.331 [2024-06-11 03:55:50.456784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.456994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.457034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.457349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.457381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 
00:59:09.332 [2024-06-11 03:55:50.457657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.457688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.457960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.457991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.458308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.458341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.458607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.458639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.458864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.458895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.459120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.459153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.459425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.459456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.459702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.459737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.459912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.459924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.460066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.460099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 
00:59:09.332 [2024-06-11 03:55:50.460327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.460358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.460652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.460682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.460912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.460944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.461783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.461802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.462073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.462085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.462338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.462350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.462587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.462600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.462777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.462790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.463051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.463064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.463317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.463330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 
00:59:09.332 [2024-06-11 03:55:50.463610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.463622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.463851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.463864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.464040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.464052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.464237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.464249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.464489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.464502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.464674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.464688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.464871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.464884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.465170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.465183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.465386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.465399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.465657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.465669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 
00:59:09.332 [2024-06-11 03:55:50.465849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.465861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.466137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.466150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.466388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.466400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.466594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.466606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.466880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.466892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.332 [2024-06-11 03:55:50.467105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.332 [2024-06-11 03:55:50.467118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.332 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.467304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.467319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.467478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.467491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.467771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.467783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.467971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.467984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 
00:59:09.333 [2024-06-11 03:55:50.468184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.468196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.468372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.468384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.468566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.468578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.468777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.468789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.469019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.469032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.469225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.469237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.469491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.469503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.469612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.469626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.469855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.469867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.470042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.470055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 
00:59:09.333 [2024-06-11 03:55:50.470271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.470283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.470498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.470510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.470675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.470687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.470798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.470809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.470936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.470948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.471203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.471216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.471388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.471400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.471505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.471517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.471754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.471766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.471878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.471888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 
00:59:09.333 [2024-06-11 03:55:50.472198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.472210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.472382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.472394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.472596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.472608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.472786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.472798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.472910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.472923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.473085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.473099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.473337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.473350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.473603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.473615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.473819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.473831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.474021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.474034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 
00:59:09.333 [2024-06-11 03:55:50.474294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.474307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.474468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.474481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.474655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.474667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.474945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.333 [2024-06-11 03:55:50.474957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.333 qpair failed and we were unable to recover it. 00:59:09.333 [2024-06-11 03:55:50.475128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.334 [2024-06-11 03:55:50.475141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.334 qpair failed and we were unable to recover it. 00:59:09.334 [2024-06-11 03:55:50.475339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.334 [2024-06-11 03:55:50.475352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.334 qpair failed and we were unable to recover it. 00:59:09.334 [2024-06-11 03:55:50.475559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.334 [2024-06-11 03:55:50.475597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.334 qpair failed and we were unable to recover it. 00:59:09.334 [2024-06-11 03:55:50.475832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.334 [2024-06-11 03:55:50.475864] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.334 qpair failed and we were unable to recover it. 00:59:09.334 [2024-06-11 03:55:50.476045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.334 [2024-06-11 03:55:50.476078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.334 qpair failed and we were unable to recover it. 00:59:09.334 [2024-06-11 03:55:50.476382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.334 [2024-06-11 03:55:50.476414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.334 qpair failed and we were unable to recover it. 
00:59:09.334 [2024-06-11 03:55:50.476686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.334 [2024-06-11 03:55:50.476717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.334 qpair failed and we were unable to recover it.
00:59:09.334 [... the same three-line connect()/qpair failure sequence repeats continuously for tqpair=0x7f01b0000b90, timestamps 2024-06-11 03:55:50.476917 through 03:55:50.507833 ...]
00:59:09.338 [2024-06-11 03:55:50.508090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.338 [2024-06-11 03:55:50.508117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:09.338 qpair failed and we were unable to recover it.
00:59:09.339 [... the same failure sequence then repeats for tqpair=0xb62e70, timestamps 2024-06-11 03:55:50.508239 through 03:55:50.523235 ...]
00:59:09.339 [2024-06-11 03:55:50.523458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.339 [2024-06-11 03:55:50.523490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.339 qpair failed and we were unable to recover it. 00:59:09.339 [2024-06-11 03:55:50.523696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.339 [2024-06-11 03:55:50.523732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.339 qpair failed and we were unable to recover it. 00:59:09.339 [2024-06-11 03:55:50.523873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.339 [2024-06-11 03:55:50.523889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.339 qpair failed and we were unable to recover it. 00:59:09.339 [2024-06-11 03:55:50.524066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.339 [2024-06-11 03:55:50.524082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.524219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.524235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.524359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.524376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.524511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.524527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.524804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.524835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.524981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.525018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.525165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.525195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 
00:59:09.340 [2024-06-11 03:55:50.525371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.525402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.525675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.525706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.525931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.525962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.526258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.526289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.526506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.526522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.526718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.526735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.526911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.526948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.527160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.527191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.527320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.527351] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.527521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.527552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 
00:59:09.340 [2024-06-11 03:55:50.527838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.527869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.528141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.528174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.528322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.528353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.528508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.528540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.528683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.528714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.529022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.529054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.529287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.529318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.529464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.529495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.529665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.529701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.529914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.529931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 
00:59:09.340 [2024-06-11 03:55:50.530058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.530074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.530269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.530299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.530558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.530588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.530793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.530823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.531095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.531131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.531327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.531358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.531519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.531536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.531729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.531760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.532033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.532065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.532363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.532394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 
00:59:09.340 [2024-06-11 03:55:50.532604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.532635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.532881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.532912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.340 qpair failed and we were unable to recover it. 00:59:09.340 [2024-06-11 03:55:50.533052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.340 [2024-06-11 03:55:50.533069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.533207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.533223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.533478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.533495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.533640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.533657] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.533924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.533941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.534111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.534128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.534247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.534263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.534380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.534396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 
00:59:09.341 [2024-06-11 03:55:50.534628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.534644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.534903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.534920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.535091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.535107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.535213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.535228] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.535533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.535564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.535828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.535845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.536032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.536049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.536337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.536353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.536542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.536559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.536747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.536763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 
00:59:09.341 [2024-06-11 03:55:50.536882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.536899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.537079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.537096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.537279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.537310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.537556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.537587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.537744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.537775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.538050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.538081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.538288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.538318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.538545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.538576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.538816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.538832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.538955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.538975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 
00:59:09.341 [2024-06-11 03:55:50.539130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.539158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.539404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.539421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.539589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.539606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.539721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.539738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.540002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.540042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.540326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.540357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.540517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.540548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.540749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.540780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.541048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.541080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.541353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.541384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 
00:59:09.341 [2024-06-11 03:55:50.541622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.341 [2024-06-11 03:55:50.541638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.341 qpair failed and we were unable to recover it. 00:59:09.341 [2024-06-11 03:55:50.541830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.541846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.542090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.542121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.542348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.542379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.542584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.542600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.542727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.542743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.543026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.543057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.543277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.543308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.543528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.543563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.543764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.543796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 
00:59:09.342 [2024-06-11 03:55:50.544036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.544053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.544257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.544288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.544505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.544536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.544703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.544743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.544861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.544877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.545058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.545075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.545190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.545210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.545410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.545426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.545658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.545676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.545839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.545856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 
00:59:09.342 [2024-06-11 03:55:50.545964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.545981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.546166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.546198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.546446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.546477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.546589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.546620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.546769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.546807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.546926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.546942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.547045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.547060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.547198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.547241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.547546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.547576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 00:59:09.342 [2024-06-11 03:55:50.547776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.547792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it. 
00:59:09.342 [2024-06-11 03:55:50.547985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.342 [2024-06-11 03:55:50.548016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.342 qpair failed and we were unable to recover it.
(the same message triple repeats for tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420, from [2024-06-11 03:55:50.548202] through [2024-06-11 03:55:50.555426])
00:59:09.343 [2024-06-11 03:55:50.555649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.343 [2024-06-11 03:55:50.555718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.343 qpair failed and we were unable to recover it. 00:59:09.343 [2024-06-11 03:55:50.555961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.343 [2024-06-11 03:55:50.555998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.343 qpair failed and we were unable to recover it. 00:59:09.343 [2024-06-11 03:55:50.556198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.343 [2024-06-11 03:55:50.556217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.343 qpair failed and we were unable to recover it. 00:59:09.343 [2024-06-11 03:55:50.556417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.343 [2024-06-11 03:55:50.556434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.343 qpair failed and we were unable to recover it. 00:59:09.343 [2024-06-11 03:55:50.556623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.343 [2024-06-11 03:55:50.556640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.343 qpair failed and we were unable to recover it. 00:59:09.343 [2024-06-11 03:55:50.556730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.343 [2024-06-11 03:55:50.556745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.343 qpair failed and we were unable to recover it. 00:59:09.343 [2024-06-11 03:55:50.556850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.343 [2024-06-11 03:55:50.556866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.343 qpair failed and we were unable to recover it. 00:59:09.343 [2024-06-11 03:55:50.556972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.557001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.557222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.557253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.557389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.557420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 
00:59:09.344 [2024-06-11 03:55:50.557629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.557671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.557784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.557800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.558069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.558086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.558299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.558330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.558515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.558546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.558794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.558811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.559006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.559027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.559236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.559252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.559440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.559456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.559721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.559738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 
00:59:09.344 [2024-06-11 03:55:50.559932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.559963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.560221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.560253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.560470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.560500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.560611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.560642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.560908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.560939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.561139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.561156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.561291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.561308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.561418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.561437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.561669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.561686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.561826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.561842] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 
00:59:09.344 [2024-06-11 03:55:50.561957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.561972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.562084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.562096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.562225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.562235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.562337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.562349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.562511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.562523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.562635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.562647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.562815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.562827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.562987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.562998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.563118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.563130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.344 [2024-06-11 03:55:50.563338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.563349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 
00:59:09.344 [2024-06-11 03:55:50.563457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.344 [2024-06-11 03:55:50.563469] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.344 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.563659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.563690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.563948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.563984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.564214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.564246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.564469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.564500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.564762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.564773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.565004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.565020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.565201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.565213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.565413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.565424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.565533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.565544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 
00:59:09.345 [2024-06-11 03:55:50.565640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.565650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.565817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.565829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.565932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.565943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.566100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.566112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.566272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.566284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.566449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.566460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.566738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.566750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.566867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.566879] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.566962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.566972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.567142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.567154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 
00:59:09.345 [2024-06-11 03:55:50.567273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.567304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.567588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.567620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.567894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.567907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.568018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.568028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.568197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.568209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.568329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.568342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.568504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.568515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.568737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.568778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.568941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.568972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.569204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.569237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 
00:59:09.345 [2024-06-11 03:55:50.569519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.569533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.569701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.569712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.569888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.569899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.569967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.569977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.570084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.570096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.570280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.570291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.570406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.570418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.570513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.570523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.570631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.345 [2024-06-11 03:55:50.570643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.345 qpair failed and we were unable to recover it. 00:59:09.345 [2024-06-11 03:55:50.570828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.570871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 
00:59:09.346 [2024-06-11 03:55:50.571080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.571111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.571342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.571374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.571530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.571561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.571714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.571746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.571903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.571935] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.572086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.572097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.572261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.572273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.572386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.572397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.572569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.572601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.572802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.572833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 
00:59:09.346 [2024-06-11 03:55:50.573099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.573131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.573359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.573390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.573615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.573647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.573785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.573815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.574024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.574036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.574170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.574182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.574326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.574338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.574517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.574528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.574747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.574777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.575045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.575077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 
00:59:09.346 [2024-06-11 03:55:50.575284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.575318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.575533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.575563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.575692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.575703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.575864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.575876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.576044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.576056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.576192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.576204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.576305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.576316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.576426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.576463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.576695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.576726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.576961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.576992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 
00:59:09.346 [2024-06-11 03:55:50.577316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.577350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.577521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.577552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.577833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.577845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.577944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.577955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.578127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.578139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.578243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.578255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.578422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.578452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.346 qpair failed and we were unable to recover it. 00:59:09.346 [2024-06-11 03:55:50.578587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.346 [2024-06-11 03:55:50.578618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.578754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.578786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.579085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.579123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 
00:59:09.347 [2024-06-11 03:55:50.579392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.579423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.579634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.579665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.579879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.579910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.580118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.580130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.580357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.580369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.580571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.580602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.580802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.580832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.581067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.581080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.581244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.581257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.581481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.581493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 
00:59:09.347 [2024-06-11 03:55:50.581687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.581699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.581809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.581821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.582071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.582083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.582203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.582215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.582410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.582452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.582672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.582704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.582911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.582927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.583101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.583118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.583222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.583238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.583355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.583372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 
00:59:09.347 [2024-06-11 03:55:50.583559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.583575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.583771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.583787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.583901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.583917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.584160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.584199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.584508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.584543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.584839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.584870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.585028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.585059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.585222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.585257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.585419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.585449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 00:59:09.347 [2024-06-11 03:55:50.585749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.347 [2024-06-11 03:55:50.585780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.347 qpair failed and we were unable to recover it. 
00:59:09.347 [2024-06-11 03:55:50.586095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.347 [2024-06-11 03:55:50.586128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:09.347 qpair failed and we were unable to recover it.
00:59:09.347-00:59:09.353 [... the same three-line error sequence repeats for every reconnect attempt from 2024-06-11 03:55:50.586331 through 03:55:50.637237: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 / 0x7f01b0000b90 / 0x7f01b8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:59:09.353 [2024-06-11 03:55:50.637539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.637570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.637734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.637765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.637981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.638022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.638228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.638259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.638474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.638504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.638774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.638805] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.639020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.639051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.639381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.639412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.639646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.639677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.639928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.639959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 
00:59:09.353 [2024-06-11 03:55:50.640266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.640299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.640572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.640603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.640812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.640843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.641069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.353 [2024-06-11 03:55:50.641101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.353 qpair failed and we were unable to recover it. 00:59:09.353 [2024-06-11 03:55:50.641306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.641336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.641577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.641608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.641835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.641866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.642001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.642015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.642196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.642228] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.642520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.642551] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 
00:59:09.354 [2024-06-11 03:55:50.642853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.642884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.643116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.643136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.643399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.643430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.643645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.643677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.643886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.643917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.644231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.644243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.644469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.644480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.644663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.644694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.644990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.645035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.645195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.645226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 
00:59:09.354 [2024-06-11 03:55:50.645464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.645495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.645698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.645729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.645869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.645899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.646181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.646213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.646511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.646543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.646763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.646794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.646991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.647002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.647203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.647236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.647506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.647536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.647702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.647732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 
00:59:09.354 [2024-06-11 03:55:50.647879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.647910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.648113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.648145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.648424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.648455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.648663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.648692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.648917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.648947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.649138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.649150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.649328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.649359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.649510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.649541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.649761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.649791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.354 qpair failed and we were unable to recover it. 00:59:09.354 [2024-06-11 03:55:50.649934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.354 [2024-06-11 03:55:50.649945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 
00:59:09.355 [2024-06-11 03:55:50.650187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.650200] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.650385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.650417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.650629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.650660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.650821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.650852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.650986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.650997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.651181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.651220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.651432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.651462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.651706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.651737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.651860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.651871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.652075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.652107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 
00:59:09.355 [2024-06-11 03:55:50.652325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.652356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.652570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.652600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.652841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.652872] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.653133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.653145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.653398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.653428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.653648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.653679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.653824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.653835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.654108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.654120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.654289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.654302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.654478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.654489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 
00:59:09.355 [2024-06-11 03:55:50.654595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.654636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.654923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.654953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.655238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.655270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.655430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.655461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.655616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.655646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.655885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.655915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.656206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.656217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.656314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.656324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.656584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.656595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.656719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.656730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 
00:59:09.355 [2024-06-11 03:55:50.656975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.656987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.657151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.657162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.657323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.657335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.657478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.657509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.657747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.657778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.658064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.658096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.658313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.658343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.658573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.658605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.658826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.658857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.355 qpair failed and we were unable to recover it. 00:59:09.355 [2024-06-11 03:55:50.659061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.355 [2024-06-11 03:55:50.659093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 
00:59:09.356 [2024-06-11 03:55:50.659305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.659317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.659574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.659604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.659763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.659793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.659956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.659985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.660214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.660226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.660403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.660434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.660678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.660709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.660927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.660957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.661226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.661237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.661484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.661495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 
00:59:09.356 [2024-06-11 03:55:50.661677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.661688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.661805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.661816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.661923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.661934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.662199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.662231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.662444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.662475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.662689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.662719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.662920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.662931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.663036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.663046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.663224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.663267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.663543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.663574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 
00:59:09.356 [2024-06-11 03:55:50.663787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.663817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.664047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.664059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.664310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.664344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.664577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.664607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.664720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.664751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.664890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.664920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.665055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.665068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.665226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.665257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.665412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.665442] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.665707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.665738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 
00:59:09.356 [2024-06-11 03:55:50.665949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.665961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.666161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.666194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.666440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.666471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.666791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.666822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.667111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.667122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.667324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.667354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.667583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.667614] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.667910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.667941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.668169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.668200] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 00:59:09.356 [2024-06-11 03:55:50.668416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.356 [2024-06-11 03:55:50.668447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.356 qpair failed and we were unable to recover it. 
00:59:09.356 [2024-06-11 03:55:50.668664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.356 [2024-06-11 03:55:50.668694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.356 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back roughly 200 more times with only the microsecond timestamps varying, from 03:55:50.668840 through 03:55:50.715875 (console time 00:59:09.356 to 00:59:09.640) ...]
00:59:09.640 [2024-06-11 03:55:50.716030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.640 [2024-06-11 03:55:50.716061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.640 qpair failed and we were unable to recover it. 00:59:09.640 [2024-06-11 03:55:50.716195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.640 [2024-06-11 03:55:50.716225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.640 qpair failed and we were unable to recover it. 00:59:09.640 [2024-06-11 03:55:50.716422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.640 [2024-06-11 03:55:50.716434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.640 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.716612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.716642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.716864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.716894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.717190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.717222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.717487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.717518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.717753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.717783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.718000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.718055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.718257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.718288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 
00:59:09.641 [2024-06-11 03:55:50.718494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.718524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.718740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.718770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.718888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.718919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.719204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.719236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.719433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.719444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.719623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.719635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.719862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.719892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.720127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.720156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.720453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.720482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.720697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.720728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 
00:59:09.641 [2024-06-11 03:55:50.721025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.721057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.721281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.721311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.721547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.721578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.721855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.721888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.722042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.722074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.722244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.722255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.722367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.722377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.722537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.722549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.722802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.722813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.722914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.722925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 
00:59:09.641 [2024-06-11 03:55:50.723023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.723193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.723365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.723473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.723584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.723692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.723794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.723916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.723928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.724110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.724122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.724222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.724232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 
00:59:09.641 [2024-06-11 03:55:50.724349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.641 [2024-06-11 03:55:50.724360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.641 qpair failed and we were unable to recover it. 00:59:09.641 [2024-06-11 03:55:50.724587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.724618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.724796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.724827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.724994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.725040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.725177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.725207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.725476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.725508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.725701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.725731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.725891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.725922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.726085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.726122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.726361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.726392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 
00:59:09.642 [2024-06-11 03:55:50.726540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.726552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.726708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.726720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.726892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.726905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.727065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.727077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.727235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.727246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.727356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.727368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.727463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.727474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.727576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.727587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.727698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.727709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.727885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.727917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 
00:59:09.642 [2024-06-11 03:55:50.728197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.728235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.728524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.728556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.728706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.728737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.728956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.728989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.729174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.729187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.729279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.729290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.729547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.729579] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.729747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.729778] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.730083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.730121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.730332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.730344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 
00:59:09.642 [2024-06-11 03:55:50.730516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.730547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.730718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.730751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.731008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.731033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.731230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.731262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.731480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.731513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.731745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.731779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.732076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.732115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.732345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.732377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.732669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.642 [2024-06-11 03:55:50.732694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.642 qpair failed and we were unable to recover it. 00:59:09.642 [2024-06-11 03:55:50.732879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.732911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 
00:59:09.643 [2024-06-11 03:55:50.733122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.733135] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.733224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.733235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.733394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.733406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.733522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.733552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.733778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.733815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.733991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.734032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.734232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.734243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.734334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.734346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.734534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.734549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.734725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.734757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 
00:59:09.643 [2024-06-11 03:55:50.734987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.735030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.735256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.735268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.735504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.735516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.735615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.735625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.735800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.735811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.735916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.735928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.736036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.736046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.736278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.736290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.736529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.736542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.736665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.736676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 
00:59:09.643 [2024-06-11 03:55:50.736773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.736783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.736891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.736903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.737079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.737091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.737273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.737308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.737474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.737506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.737775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.737810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.738032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.738068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.738363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.738397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.738649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.738660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.738760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.738771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 
00:59:09.643 [2024-06-11 03:55:50.738896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.738910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.739086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.739098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.739252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.739264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.739426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.739456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.739751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.739788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.739999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.740046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.740250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.643 [2024-06-11 03:55:50.740262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.643 qpair failed and we were unable to recover it. 00:59:09.643 [2024-06-11 03:55:50.740439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.740451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.740560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.740571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.740799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.740811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 
00:59:09.644 [2024-06-11 03:55:50.740980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.740992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.741092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.741103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.741331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.741343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.741515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.741526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.741760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.741792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.742093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.742129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.742365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.742397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.742672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.742703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.742961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.742991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.743284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.743296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 
00:59:09.644 [2024-06-11 03:55:50.743413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.743444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.743712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.743744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.743963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.743997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.744231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.744264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.744482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.744494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.744586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.744596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.744778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.744790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.744977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.745030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.745201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.745232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.745503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.745538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 
00:59:09.644 [2024-06-11 03:55:50.745684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.745720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.746024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.746061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.746241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.746254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.746445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.746456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.746690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.746702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.746889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.746919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.747059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.747092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.747459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.644 [2024-06-11 03:55:50.747497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.644 qpair failed and we were unable to recover it. 00:59:09.644 [2024-06-11 03:55:50.747760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.747798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.747922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.747941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 
00:59:09.645 [2024-06-11 03:55:50.748200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.748219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.748414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.748431] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.748625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.748642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.748810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.748826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.748957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.748974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.749152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.749174] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.749346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.749360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.749461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.749471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.749638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.749650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.749769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.749781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 
00:59:09.645 [2024-06-11 03:55:50.749947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.749958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.750183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.750195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.750307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.750321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.750409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.750419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.750597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.750609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.750767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.750779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.750882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.750893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.751051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.751063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.751155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.751166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.751280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.751291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 
00:59:09.645 [2024-06-11 03:55:50.751472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.751483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.751678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.751690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.751801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.751832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.751978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.752023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.752231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.752261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.752400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.752412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.752664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.752676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.752777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.752787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.753062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.753094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.753309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.753340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 
00:59:09.645 [2024-06-11 03:55:50.753610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.753642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.753926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.753958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.754329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.754366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.754640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.754660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.754851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.754868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.755141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.755175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.645 qpair failed and we were unable to recover it. 00:59:09.645 [2024-06-11 03:55:50.755340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.645 [2024-06-11 03:55:50.755371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.755660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.755691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.755854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.755886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.756060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.756094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 
00:59:09.646 [2024-06-11 03:55:50.756232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.756264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.756391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.756421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.756654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.756670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.756844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.756875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.757028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.757060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.757273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.757311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.757438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.757455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.757637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.757654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.757821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.757837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.757956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.757972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 
00:59:09.646 [2024-06-11 03:55:50.758072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.758084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.758271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.758303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.758454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.758484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.758687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.758716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.758885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.758918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.759073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.759112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.759393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.759428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.759572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.759588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.759700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.759717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.759941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.759958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 
00:59:09.646 [2024-06-11 03:55:50.760265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.760306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.760588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.760618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.760841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.760871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.761089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.761122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.761427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.761443] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.761674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.761691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.761820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.761865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.762036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.762068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.762291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.762322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.762523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.762539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 
00:59:09.646 [2024-06-11 03:55:50.762708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.762724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.762816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.762831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.763095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.763118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.763268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.763283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.646 [2024-06-11 03:55:50.763460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.646 [2024-06-11 03:55:50.763477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.646 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.763599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.763615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.763791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.763807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.764032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.764064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.764270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.764300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.764447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.764487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 
00:59:09.647 [2024-06-11 03:55:50.764675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.764691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.764810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.764827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.764942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.764958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.765087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.765103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.765235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.765251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.765383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.765400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.765519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.765559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.765763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.765793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.766090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.766125] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.766306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.766323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 
00:59:09.647 [2024-06-11 03:55:50.766451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.766467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.766570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.766585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.766698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.766713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.766915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.766931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.767102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.767119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.767242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.767259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.767419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.767459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.767725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.767756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.767908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.767939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.768218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.768252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 
00:59:09.647 [2024-06-11 03:55:50.768474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.768505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.768715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.768732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.768958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.768974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.769237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.769253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.769433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.769450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.769658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.769674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.769818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.769834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.770027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.770058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.770274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.770304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.770606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.770622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 
00:59:09.647 [2024-06-11 03:55:50.770740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.770756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.770880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.770896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.771138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.647 [2024-06-11 03:55:50.771177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.647 qpair failed and we were unable to recover it. 00:59:09.647 [2024-06-11 03:55:50.771327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.771357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.771569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.771600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.771809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.771840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.772049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.772081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.772289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.772330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.772587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.772603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.772841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.772857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 
00:59:09.648 [2024-06-11 03:55:50.773051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.773068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.773237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.773253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.773404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.773420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.773618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.773650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.773787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.773818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.774060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.774091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.774368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.774384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.774602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.774618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.774739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.774755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.774934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.774965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 
00:59:09.648 [2024-06-11 03:55:50.775208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.775239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.775466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.775496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.775705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.775736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.775936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.775967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.776245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.776276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.776502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.776519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.776629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.776645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.776881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.776897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.777081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.777098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.777249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.777266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 
00:59:09.648 [2024-06-11 03:55:50.777507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.777539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.777766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.777799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.778037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.778070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.778216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.778228] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.778474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.778487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.778603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.778615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.778771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.778784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.778956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.778986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.779215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.779246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 00:59:09.648 [2024-06-11 03:55:50.779449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.648 [2024-06-11 03:55:50.779481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.648 qpair failed and we were unable to recover it. 
00:59:09.648 [2024-06-11 03:55:50.779694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.648 [2024-06-11 03:55:50.779727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.648 qpair failed and we were unable to recover it.
00:59:09.649 [2024-06-11 03:55:50.780137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.649 [2024-06-11 03:55:50.780159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:09.649 qpair failed and we were unable to recover it.
00:59:09.649 [2024-06-11 03:55:50.782956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.649 [2024-06-11 03:55:50.782990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420
00:59:09.649 qpair failed and we were unable to recover it.
00:59:09.651 [2024-06-11 03:55:50.800093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.651 [2024-06-11 03:55:50.800166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420
00:59:09.651 qpair failed and we were unable to recover it.
00:59:09.654 [... the same three-line connect()/qpair failure sequence repeats continuously from 03:55:50.779 through 03:55:50.825, cycling over tqpair handles 0x7f01b0000b90, 0x7f01a8000b90, 0x7f01b8000b90, and 0xb62e70, always with errno = 111 against addr=10.0.0.2, port=4420 ...]
00:59:09.654 [2024-06-11 03:55:50.826075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.826106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.826324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.826355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.826626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.826659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.826880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.826911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.827203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.827235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.827440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.827471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.827685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.827723] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.828023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.654 [2024-06-11 03:55:50.828054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.654 qpair failed and we were unable to recover it. 00:59:09.654 [2024-06-11 03:55:50.828264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.828295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.828569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.828600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 
00:59:09.655 [2024-06-11 03:55:50.828839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.828855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.829120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.829137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.829392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.829422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.829693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.829724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.829939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.829969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.830136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.830168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.830386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.830417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.830686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.830716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.830858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.830888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.831036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.831067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 
00:59:09.655 [2024-06-11 03:55:50.831335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.831366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.831510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.831539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.831828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.831844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.832013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.832029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.832201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.832232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.832395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.832426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.832718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.832748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.833038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.833069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.833218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.833249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.833505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.833521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 
00:59:09.655 [2024-06-11 03:55:50.833701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.833717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.833830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.833847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.834033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.834064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.834388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.834458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.834641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.834660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.834863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.834931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.835192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.835229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.835455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.835487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.835717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.835748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.835900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.835932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 
00:59:09.655 [2024-06-11 03:55:50.836153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.836186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.836411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.836441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.836661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.836671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.836893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.836905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.837030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.837059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.837225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.837256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.655 [2024-06-11 03:55:50.837461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.655 [2024-06-11 03:55:50.837501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.655 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.837711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.837722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.837986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.838028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.838255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.838286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 
00:59:09.656 [2024-06-11 03:55:50.838578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.838609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.838780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.838811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.839049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.839082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.839329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.839359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.839571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.839602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.839829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.839840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.840027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.840059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.840361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.840392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.840688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.840718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.840977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.841008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 
00:59:09.656 [2024-06-11 03:55:50.841313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.841344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.841664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.841695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.841888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.841919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.842165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.842196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.842355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.842366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.842585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.842616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.842765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.842796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.843065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.843097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.843256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.843287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.843510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.843541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 
00:59:09.656 [2024-06-11 03:55:50.843685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.843715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.843988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.844028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.844247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.844259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.844423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.844435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.844548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.844560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.844669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.844680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.844788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.844799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.844995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.845032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.845166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.845197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.845409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.845441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 
00:59:09.656 [2024-06-11 03:55:50.845608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.845619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.845789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.845820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.846030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.846062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.846286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.846317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.656 [2024-06-11 03:55:50.846524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.656 [2024-06-11 03:55:50.846535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.656 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.846613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.846623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.846785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.846799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.847008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.847061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.847283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.847315] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.847522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.847553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 
00:59:09.657 [2024-06-11 03:55:50.847706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.847737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.847896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.847925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.848147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.848179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.848393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.848423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.848563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.848593] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.848738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.848769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.848988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.849030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.849250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.849281] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.849474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.849505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.849720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.849750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 
00:59:09.657 [2024-06-11 03:55:50.849963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.849995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.850226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.850258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.850496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.850512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.850632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.850648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.850772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.850813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.850961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.850992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.851266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.851298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.851566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.851582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.851842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.851859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.852043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.852060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 
00:59:09.657 [2024-06-11 03:55:50.852347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.852378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.852608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.852638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.852783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.852799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.852924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.852959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.853184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.853216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.853393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.853424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.853706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.853736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.853977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.854007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.854227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.854257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.854491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.854521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 
00:59:09.657 [2024-06-11 03:55:50.854743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.854774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.854912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.854942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.855170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.657 [2024-06-11 03:55:50.855202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.657 qpair failed and we were unable to recover it. 00:59:09.657 [2024-06-11 03:55:50.855354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.658 [2024-06-11 03:55:50.855370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.658 qpair failed and we were unable to recover it. 00:59:09.658 [2024-06-11 03:55:50.855564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.658 [2024-06-11 03:55:50.855594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.658 qpair failed and we were unable to recover it. 00:59:09.658 [2024-06-11 03:55:50.855812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.658 [2024-06-11 03:55:50.855843] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.658 qpair failed and we were unable to recover it. 00:59:09.658 [2024-06-11 03:55:50.855992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.658 [2024-06-11 03:55:50.856039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.658 qpair failed and we were unable to recover it. 00:59:09.658 [2024-06-11 03:55:50.856258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.658 [2024-06-11 03:55:50.856288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.658 qpair failed and we were unable to recover it. 00:59:09.658 [2024-06-11 03:55:50.856437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.658 [2024-06-11 03:55:50.856468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.658 qpair failed and we were unable to recover it. 00:59:09.658 [2024-06-11 03:55:50.856606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.658 [2024-06-11 03:55:50.856622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.658 qpair failed and we were unable to recover it. 
00:59:09.658 [2024-06-11 03:55:50.856793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:59:09.658 [2024-06-11 03:55:50.856809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 
00:59:09.658 qpair failed and we were unable to recover it. 
00:59:09.658 [the connect()/qpair-failure triplet above repeats continuously from 03:55:50.856 through 03:55:50.906, cycling through tqpairs 0x7f01a8000b90, 0xb62e70, 0x7f01b8000b90, and 0x7f01b0000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111]
00:59:09.663 [2024-06-11 03:55:50.906642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.663 [2024-06-11 03:55:50.906715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.663 qpair failed and we were unable to recover it. 00:59:09.663 [2024-06-11 03:55:50.906964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.663 [2024-06-11 03:55:50.906998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.663 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.907247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.907280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.907487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.907504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.907684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.907715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.907925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.907957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.908129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.908162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.908393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.908425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.908731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.908762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.908914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.908945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 
00:59:09.664 [2024-06-11 03:55:50.909090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.909124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.909266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.909298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.909435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.909477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.909658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.909674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.909850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.909867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.910035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.910070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.910267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.910298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.910523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.910554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.910718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.910749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.910977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.911008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 
00:59:09.664 [2024-06-11 03:55:50.911230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.911261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.911498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.911529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.911803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.911813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.912099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.912131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.912298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.912329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.912469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.912500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.912717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.912748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.912958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.912992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.913163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.913195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.913405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.913436] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 
00:59:09.664 [2024-06-11 03:55:50.913577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.913594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.913709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.913725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.913924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.913940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.914046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.914059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.914166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.914177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.914352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.914363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.914484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.914515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.914729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.914760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.914979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.915018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 00:59:09.664 [2024-06-11 03:55:50.915224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.664 [2024-06-11 03:55:50.915256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.664 qpair failed and we were unable to recover it. 
00:59:09.664 [2024-06-11 03:55:50.915394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.915424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.915583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.915614] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.915888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.915920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.916128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.916160] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.916386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.916417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.916669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.916700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.916854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.916886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.917101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.917133] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.917266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.917296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.917508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.917540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 
00:59:09.665 [2024-06-11 03:55:50.917743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.917774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.917920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.917951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.918109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.918142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.918357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.918388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.918689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.918720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.918993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.919031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.919253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.919285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.919427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.919459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.919682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.919693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.919808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.919819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 
00:59:09.665 [2024-06-11 03:55:50.919900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.919911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.920068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.920080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.920206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.920237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.920442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.920474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.920746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.920776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.921045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.921077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.921221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.921253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.921472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.921508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.921803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.921834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.922043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.922075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 
00:59:09.665 [2024-06-11 03:55:50.922326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.922358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.922471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.922482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.922597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.922608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.922703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.922714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.922969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.923000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.923168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.923199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.923417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.923448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.923605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.665 [2024-06-11 03:55:50.923637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.665 qpair failed and we were unable to recover it. 00:59:09.665 [2024-06-11 03:55:50.923840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.923870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.924090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.924123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 
00:59:09.666 [2024-06-11 03:55:50.924261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.924291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.924521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.924553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.924694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.924724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.924871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.924882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.924972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.924984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.925102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.925134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.925408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.925439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.925662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.925693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.925893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.925924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.926082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.926114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 
00:59:09.666 [2024-06-11 03:55:50.926398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.926429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.926702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.926713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.926818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.926829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.927061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.927093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.927304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.927335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.927538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.927569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.927838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.927850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.927947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.927978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.928145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.928177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.928380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.928412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 
00:59:09.666 [2024-06-11 03:55:50.928615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.928645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.928843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.928854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.929025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.929057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.929215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.929246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.929447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.929478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.929658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.929669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.929880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.929891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.930070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.930107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.930312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.930344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.930563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.930594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 
00:59:09.666 [2024-06-11 03:55:50.930760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.930791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.930946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.930977] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.931218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.931255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.931527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.931558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.931778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.931809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.932018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.932050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.666 qpair failed and we were unable to recover it. 00:59:09.666 [2024-06-11 03:55:50.932253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.666 [2024-06-11 03:55:50.932285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.932526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.932557] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.932711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.932743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.932986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.933042] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 
00:59:09.667 [2024-06-11 03:55:50.933199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.933231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.933368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.933399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.933633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.933665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.933799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.933830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.934045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.934077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.934375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.934407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.934619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.934651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.934944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.934975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.935257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.935290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 00:59:09.667 [2024-06-11 03:55:50.935501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.667 [2024-06-11 03:55:50.935532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.667 qpair failed and we were unable to recover it. 
00:59:09.667 [2024-06-11 03:55:50.935750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.667 [2024-06-11 03:55:50.935761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.667 qpair failed and we were unable to recover it.
[... the same three-message connect()/qpair failure repeats for every retry between 03:55:50.935750 and 03:55:50.981875; only the timestamps change, while the tqpair (0x7f01b0000b90) and target (10.0.0.2:4420) stay the same ...]
00:59:09.672 [2024-06-11 03:55:50.981843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.672 [2024-06-11 03:55:50.981875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.672 qpair failed and we were unable to recover it.
00:59:09.672 [2024-06-11 03:55:50.982177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.982214] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.982422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.982453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.982703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.982735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.982950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.982985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.983209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.983221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.983395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.983429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.983636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.983668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.983827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.983866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.984088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.984125] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 00:59:09.672 [2024-06-11 03:55:50.984341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.672 [2024-06-11 03:55:50.984372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.672 qpair failed and we were unable to recover it. 
00:59:09.673 [2024-06-11 03:55:50.984648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.984681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.984864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.984896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.985074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.985086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.985241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.985253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.985425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.985458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.985688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.985720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.985868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.985899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.986133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.986145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.986236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.986248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.986376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.986407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 
00:59:09.673 [2024-06-11 03:55:50.986626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.986659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.986878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.986921] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.987101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.987113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.987288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.987300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.987469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.987504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.987721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.987753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.987976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.988007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.988233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.988245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.988495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.988506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.988698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.988710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 
00:59:09.673 [2024-06-11 03:55:50.988885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.988896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.989061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.989094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.989262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.989294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.989506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.989538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.989721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.989734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.989971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.989983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.990187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.990199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.990358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.990371] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.990575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.990607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.990750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.990781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 
00:59:09.673 [2024-06-11 03:55:50.990950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.990981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.991256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.991268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.991378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.991389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.991493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.991505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.991597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.991608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.991809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.991839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.992018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.992050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.992200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.992243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.992390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.992422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.992605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.992636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 
00:59:09.673 [2024-06-11 03:55:50.992790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.992821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.993045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.993057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.993178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.993210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.993423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.673 [2024-06-11 03:55:50.993455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.673 qpair failed and we were unable to recover it. 00:59:09.673 [2024-06-11 03:55:50.993726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.993758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.993965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.993996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.994164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.994195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.994411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.994423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.994516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.994527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.994688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.994700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 
00:59:09.674 [2024-06-11 03:55:50.994878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.994890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.994998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.995014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.995106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.995118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.995213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.995225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.995483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.995494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.995600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.995612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.995703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.995714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.995891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.995902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.996076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.996088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 00:59:09.674 [2024-06-11 03:55:50.996407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.674 [2024-06-11 03:55:50.996418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.674 qpair failed and we were unable to recover it. 
00:59:09.674 [2024-06-11 03:55:50.996617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.674 [2024-06-11 03:55:50.996628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.674 qpair failed and we were unable to recover it.
00:59:09.674 (sequence repeats for tqpair=0x7f01b0000b90 through 03:55:50.997339)
00:59:09.674 [2024-06-11 03:55:50.997503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.674 [2024-06-11 03:55:50.997573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:09.674 qpair failed and we were unable to recover it.
00:59:09.674 (sequence repeats for tqpair=0xb62e70 through 03:55:51.002525, then resumes for tqpair=0x7f01b0000b90 from 03:55:51.002700)
00:59:09.676 (sequence repeats for tqpair=0x7f01b0000b90, addr=10.0.0.2, port=4420, timestamps 03:55:51.003019 through 03:55:51.019265; every attempt ends with "qpair failed and we were unable to recover it.")
00:59:09.676 [2024-06-11 03:55:51.019443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.019455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.019753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.019784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.019931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.019962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.020127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.020139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.020310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.020321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.020413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.020424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.020671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.020682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.020907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.020918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.021149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.021181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.021395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.021426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 
00:59:09.676 [2024-06-11 03:55:51.021634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.021665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.676 [2024-06-11 03:55:51.021966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.676 [2024-06-11 03:55:51.021979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.676 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.022143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.022155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.022315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.022327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.022554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.022565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.022721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.022733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.022914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.022926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.023013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.023025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.023194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.023206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.023306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.023318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 
00:59:09.677 [2024-06-11 03:55:51.023442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.023454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.023700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.023711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.023963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.023974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.024102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.024114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.024229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.024240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.024392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.024404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.024518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.024529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.024621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.024633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.024809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.024820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 00:59:09.677 [2024-06-11 03:55:51.024928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.677 [2024-06-11 03:55:51.024939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.677 qpair failed and we were unable to recover it. 
00:59:09.956 [2024-06-11 03:55:51.025132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.025143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.025377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.025389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.025510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.025521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.025613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.025624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.025816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.025827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.026007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.026023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.026137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.026148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.026400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.026411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.026583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.026594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.026761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.026773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 
00:59:09.956 [2024-06-11 03:55:51.027003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.027018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.027246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.027257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.027364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.027375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.027550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.027562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.027736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.027748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.027975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.027987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.028150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.028162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.028274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.028286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.028532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.028544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.028723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.028734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 
00:59:09.956 [2024-06-11 03:55:51.028834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.028845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.029011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.029024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.029134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.029146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.956 [2024-06-11 03:55:51.029248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.956 [2024-06-11 03:55:51.029259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.956 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.029436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.029447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.029567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.029578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.029756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.029768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.029883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.029914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.030129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.030161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.030312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.030343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 
00:59:09.957 [2024-06-11 03:55:51.030543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.030574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.030814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.030845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.030989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.031028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.031255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.031267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.031413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.031444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.031735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.031766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.032050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.032062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.032223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.032235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.032362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.032393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.032680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.032711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 
00:59:09.957 [2024-06-11 03:55:51.032912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.032950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.033111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.033123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.033232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.033244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.033342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.033354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.033529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.033559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.033706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.033736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.033943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.033974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.034141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.034152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.034364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.034396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.034689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.034720] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 
00:59:09.957 [2024-06-11 03:55:51.034871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.034902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.035109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.035143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.035364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.035395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.035511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.035542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.035762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.035793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.036004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.036044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.036211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.036243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.036465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.036496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.036644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.036675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.036889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.036920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 
00:59:09.957 [2024-06-11 03:55:51.037125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.037137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.957 [2024-06-11 03:55:51.037246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.957 [2024-06-11 03:55:51.037283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.957 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.037574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.037606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.037761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.037791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.038054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.038065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.038239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.038250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.038497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.038529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.038738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.038769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.039049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.039061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.039230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.039241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 
00:59:09.958 [2024-06-11 03:55:51.039401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.039432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.039752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.039783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.040000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.040014] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.040246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.040257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.040464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.040494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.040772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.040804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.041025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.041037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.041209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.041241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.041389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.041420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.041709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.041740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 
00:59:09.958 [2024-06-11 03:55:51.041885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.041897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.042082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.042113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.042413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.042444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.042712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.042743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.042958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.042989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.043213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.043245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.043512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.043543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.043779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.043810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.044084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.044154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.044420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.044458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 
00:59:09.958 [2024-06-11 03:55:51.044675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.044708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.044872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.044903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.045066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.045110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.045304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.045320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.045503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.045519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.045712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.045745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.045912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.045943] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.046181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.046214] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.046458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.046490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 00:59:09.958 [2024-06-11 03:55:51.046700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.958 [2024-06-11 03:55:51.046731] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.958 qpair failed and we were unable to recover it. 
00:59:09.958 [2024-06-11 03:55:51.047039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.959 [2024-06-11 03:55:51.047073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:09.959 qpair failed and we were unable to recover it.
[... the three-message group above (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats continuously from 03:55:51.047 to 03:55:51.061 against tqpair=0xb62e70, addr=10.0.0.2, port=4420 ...]
00:59:09.960 [2024-06-11 03:55:51.061466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.960 [2024-06-11 03:55:51.061535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:09.960 qpair failed and we were unable to recover it.
[... the same group repeats from 03:55:51.061 to 03:55:51.073, mostly against tqpair=0x7f01a8000b90, with scattered occurrences against tqpair=0x7f01b0000b90 and one against tqpair=0xb62e70 interleaved ...]
00:59:09.962 [2024-06-11 03:55:51.073448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.962 [2024-06-11 03:55:51.073461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.962 qpair failed and we were unable to recover it.
[... the same group repeats from 03:55:51.073 to 03:55:51.096 against tqpair=0x7f01b0000b90; every connect() attempt in this window fails with errno = 111 and no qpair is recovered ...]
00:59:09.964 [2024-06-11 03:55:51.096882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.096914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.097144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.097176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.097387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.097418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.097726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.097757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.097984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.098021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.098294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.098325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.098593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.098630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.098849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.098880] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.099111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.099144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.099289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.099300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 
00:59:09.964 [2024-06-11 03:55:51.099540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.099571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.099722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.099753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.964 [2024-06-11 03:55:51.100049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.964 [2024-06-11 03:55:51.100080] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.964 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.100240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.100271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.100440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.100471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.100695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.100726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.100971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.101002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.101301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.101332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.101627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.101659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.101942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.101973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 
00:59:09.965 [2024-06-11 03:55:51.102291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.102323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.102458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.102489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.102670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.102700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.102969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.103001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.103211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.103243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.103530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.103561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.103720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.103751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.103978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.104019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.104307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.104339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.104486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.104517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 
00:59:09.965 [2024-06-11 03:55:51.104741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.104772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.105038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.105050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.105274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.105285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.105468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.105500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.105663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.105694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.105843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.105874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.106076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.106088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.106197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.106208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.106331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.106362] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.106654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.106685] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 
00:59:09.965 [2024-06-11 03:55:51.106887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.106918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.107071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.107103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.107344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.107355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.107595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.107606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.107806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.107836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.965 [2024-06-11 03:55:51.108051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.965 [2024-06-11 03:55:51.108083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.965 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.108244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.108285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.108506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.108517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.108799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.108831] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.109057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.109089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 
00:59:09.966 [2024-06-11 03:55:51.109296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.109326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.109610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.109622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.109738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.109749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.109957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.109988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.110166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.110197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.110440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.110471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.110687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.110718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.111044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.111083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.111284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.111295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.111472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.111483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 
00:59:09.966 [2024-06-11 03:55:51.111645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.111656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.111817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.111828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.111994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.112034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.112209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.112240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.112390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.112420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.112566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.112597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.112798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.112829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.112969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.112999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.113345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.113356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.113614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.113625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 
00:59:09.966 [2024-06-11 03:55:51.113796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.113807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.113928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.113939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.114094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.114106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.114331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.114368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.114640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.114670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.114886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.114917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.115058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.115071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.115188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.115199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.115379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.115390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.115566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.115597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 
00:59:09.966 [2024-06-11 03:55:51.115766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.115798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.116090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.116102] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.116194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.116205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.966 [2024-06-11 03:55:51.116465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.966 [2024-06-11 03:55:51.116496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.966 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.116712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.116743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.117012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.117024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.117138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.117164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.117417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.117447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.117657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.117688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.117901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.117932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 
00:59:09.967 [2024-06-11 03:55:51.118093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.118131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.118243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.118255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.118371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.118400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.118617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.118647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.118925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.118960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.119189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.119200] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.119396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.119408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.119626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.119637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.119801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.119812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.120070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.120101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 
00:59:09.967 [2024-06-11 03:55:51.120317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.120348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.120517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.120528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.120630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.120642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.120808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.120819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.120985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.121023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.121236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.121267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.121543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.121574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.121736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.121766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.121990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.122029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.122256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.122287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 
00:59:09.967 [2024-06-11 03:55:51.122560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.122572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.122741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.122753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.122941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.122973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.123203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.123215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.123382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.123413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.123552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.123583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.123817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.123849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.124032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.124065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.124229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.124261] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.124484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.124515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 
00:59:09.967 [2024-06-11 03:55:51.124664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.124695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.967 [2024-06-11 03:55:51.124998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.967 [2024-06-11 03:55:51.125039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.967 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.125242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.125253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.125433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.125465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.125605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.125636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.125856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.125886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.126107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.126121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.126212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.126223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.126349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.126380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 00:59:09.968 [2024-06-11 03:55:51.126605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.126636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it. 
00:59:09.968 [2024-06-11 03:55:51.126842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.968 [2024-06-11 03:55:51.126878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.968 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats, with only timestamps differing, roughly 200 more times from 03:55:51.126842 through 03:55:51.176203; verbatim duplicates elided ...]
00:59:09.973 [2024-06-11 03:55:51.176334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.973 [2024-06-11 03:55:51.176345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.973 qpair failed and we were unable to recover it. 00:59:09.973 [2024-06-11 03:55:51.176451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.973 [2024-06-11 03:55:51.176462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.973 qpair failed and we were unable to recover it. 00:59:09.973 [2024-06-11 03:55:51.176590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.973 [2024-06-11 03:55:51.176601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.973 qpair failed and we were unable to recover it. 00:59:09.973 [2024-06-11 03:55:51.176758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.973 [2024-06-11 03:55:51.176769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.973 qpair failed and we were unable to recover it. 00:59:09.973 [2024-06-11 03:55:51.176864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.973 [2024-06-11 03:55:51.176875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.973 qpair failed and we were unable to recover it. 00:59:09.973 [2024-06-11 03:55:51.177041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.973 [2024-06-11 03:55:51.177073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.973 qpair failed and we were unable to recover it. 00:59:09.973 [2024-06-11 03:55:51.177293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.973 [2024-06-11 03:55:51.177324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.177483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.177513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.177723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.177734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.177913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.177944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 
00:59:09.974 [2024-06-11 03:55:51.178141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.178152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.178325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.178336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.178545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.178556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.178783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.178794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.178991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.179002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.179118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.179130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.179306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.179337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.179492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.179524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.179730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.179761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.179990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.180046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 
00:59:09.974 [2024-06-11 03:55:51.180208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.180240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.180526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.180557] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.180739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.180770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.181072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.181105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.181324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.181355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.181669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.181699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.181853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.181888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.182114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.182146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.182359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.182370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.182546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.182576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 
00:59:09.974 [2024-06-11 03:55:51.182860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.182891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.183055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.183087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.183358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.183394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.183653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.183664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.183851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.183882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.184085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.184116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.184291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.184333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.184502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.184513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.184760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.974 [2024-06-11 03:55:51.184771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.974 qpair failed and we were unable to recover it. 00:59:09.974 [2024-06-11 03:55:51.185003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.185044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 
00:59:09.975 [2024-06-11 03:55:51.185263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.185296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.185462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.185493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.185777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.185808] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.185960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.185991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.186165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.186197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.186429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.186461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.186645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.186675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.186920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.186952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.187140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.187171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.187402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.187434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 
00:59:09.975 [2024-06-11 03:55:51.187661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.187692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.187906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.187937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.188209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.188240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.188509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.188545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.188758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.188789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.189025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.189057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.189263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.189274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.189457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.189489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.189646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.189677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.189887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.189918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 
00:59:09.975 [2024-06-11 03:55:51.190122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.190153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.190425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.190456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.190771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.190802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.191051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.191083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.191267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.191279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.191454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.191465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.191696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.191728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.192002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.192048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.192147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.192159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.192428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.192459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 
00:59:09.975 [2024-06-11 03:55:51.192705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.192736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.192967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.192998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.193213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.193245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.193450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.193481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.193682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.193713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.193989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.194030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.194255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.194267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.975 qpair failed and we were unable to recover it. 00:59:09.975 [2024-06-11 03:55:51.194364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.975 [2024-06-11 03:55:51.194395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.194675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.194706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.194872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.194903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 
00:59:09.976 [2024-06-11 03:55:51.195111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.195144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.195343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.195355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.195505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.195535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.195754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.195785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.195942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.195973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.196206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.196238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.196529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.196540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.196795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.196807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.196921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.196932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.197111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.197143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 
00:59:09.976 [2024-06-11 03:55:51.197292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.197323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.197592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.197623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.197779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.197810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.197959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.197995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.198216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.198248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.198528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.198558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.198836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.198867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.199079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.199113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.199295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.199307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.199565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.199596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 
00:59:09.976 [2024-06-11 03:55:51.199829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.199860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.200130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.200169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.200277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.200288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.200519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.200550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.200753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.200785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.200924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.200954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.201176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.201209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.201341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.201352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.201524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.201555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.201706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.201736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 
00:59:09.976 [2024-06-11 03:55:51.201941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.201971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.202200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.202232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.202548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.202559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.202787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.202798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.203051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.203083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.976 [2024-06-11 03:55:51.203228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.976 [2024-06-11 03:55:51.203239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.976 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.203455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.203486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.203688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.203719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.203940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.203971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.204221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.204253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 
00:59:09.977 [2024-06-11 03:55:51.204425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.204457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.204743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.204755] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.204917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.204928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.205096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.205107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.205331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.205342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.205576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.205607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.205829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.205860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.206033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.206064] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.206359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.206390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 00:59:09.977 [2024-06-11 03:55:51.206659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.977 [2024-06-11 03:55:51.206690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.977 qpair failed and we were unable to recover it. 
00:59:09.977 [2024-06-11 03:55:51.206932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.977 [2024-06-11 03:55:51.206963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.977 qpair failed and we were unable to recover it.
[... the same three-line error repeats back to back, roughly two hundred times, identical apart from its timestamps, from 03:55:51.206932 through 03:55:51.255995 ...]
00:59:09.982 [2024-06-11 03:55:51.255984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:09.982 [2024-06-11 03:55:51.255995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:09.982 qpair failed and we were unable to recover it.
00:59:09.982 [2024-06-11 03:55:51.256191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.256203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.256383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.256414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.256639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.256669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.256964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.256995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.257214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.257246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.257447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.257477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.257627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.257638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.257878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.257910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.258234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.258266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.258486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.258498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 
00:59:09.983 [2024-06-11 03:55:51.258696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.258727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.258957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.258988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.259271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.259302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.259575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.259606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.259844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.259874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.260088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.260119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.260359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.260390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.260526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.260537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.260704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.260715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.260805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.260816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 
00:59:09.983 [2024-06-11 03:55:51.260921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.260933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.261050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.261082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.261310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.261340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.261551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.261581] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.261773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.261784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.261938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.261968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.262194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.262227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.262437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.262467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.262604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.262627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.262876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.262887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 
00:59:09.983 [2024-06-11 03:55:51.263063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.263096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.263316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.263347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.263573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.263603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.983 qpair failed and we were unable to recover it. 00:59:09.983 [2024-06-11 03:55:51.263821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.983 [2024-06-11 03:55:51.263857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.264130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.264162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.264324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.264355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.264578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.264608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.264795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.264807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.264973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.265021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.265321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.265352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 
00:59:09.984 [2024-06-11 03:55:51.265628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.265659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.265899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.265930] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.266093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.266124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.266392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.266423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.266693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.266725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.266890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.266922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.267140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.267187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.267497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.267528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.267751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.267781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.267932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.267963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 
00:59:09.984 [2024-06-11 03:55:51.268138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.268170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.268377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.268407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.268627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.268659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.268792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.268823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.268960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.268990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.269216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.269247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.269545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.269576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.269752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.269783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.269955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.269986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.270207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.270238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 
00:59:09.984 [2024-06-11 03:55:51.270514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.270546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.270757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.270769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.270937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.270968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.271248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.271280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.271417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.271448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.271693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.271724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.272001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.272061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.272268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.272299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.272506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.272517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.272769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.272799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 
00:59:09.984 [2024-06-11 03:55:51.272947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.272978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.984 [2024-06-11 03:55:51.273279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.984 [2024-06-11 03:55:51.273313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.984 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.273539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.273570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.273808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.273821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.274052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.274085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.274359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.274390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.274683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.274715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.274928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.274959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.275127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.275159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.275381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.275412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 
00:59:09.985 [2024-06-11 03:55:51.275619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.275630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.275797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.275827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.276043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.276075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.276349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.276388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.276560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.276571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.276686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.276717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.277031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.277063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.277382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.277414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.277557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.277588] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.277809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.277840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 
00:59:09.985 [2024-06-11 03:55:51.278149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.278181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.278322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.278353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.278509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.278540] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.278790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.278821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.278988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.279041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.279226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.279258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.279526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.279557] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.279704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.279747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.279910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.279922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.280048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.280061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 
00:59:09.985 [2024-06-11 03:55:51.280232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.280243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.280426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.280458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.280612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.280643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.280849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.280881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.281100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.281132] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.281344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.281376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.281650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.281680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.281833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.281865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.282004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.282058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 00:59:09.985 [2024-06-11 03:55:51.282268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.985 [2024-06-11 03:55:51.282300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.985 qpair failed and we were unable to recover it. 
00:59:09.985 [2024-06-11 03:55:51.282507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.282538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.282680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.282711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.282846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.282859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.283023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.283059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.283213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.283245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.283453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.283487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.283778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.283790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.283885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.283896] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.284105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.284137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.284341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.284372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 
00:59:09.986 [2024-06-11 03:55:51.284521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.284552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.284762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.284773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.284869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.284880] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.285041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.285053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.285301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.285313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.285403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.285451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.285557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.285568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.285841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.285873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.286140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.286172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 00:59:09.986 [2024-06-11 03:55:51.286432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.286463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it. 
00:59:09.986 [2024-06-11 03:55:51.286679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.986 [2024-06-11 03:55:51.286710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.986 qpair failed and we were unable to recover it.
00:59:09.986 [the triplet above — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously with only the timestamps changing, from 03:55:51.286679 through 03:55:51.329476]
00:59:09.992 [2024-06-11 03:55:51.329444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.329476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it.
00:59:09.992 [2024-06-11 03:55:51.329697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.329734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.329875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.329906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.330090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.330122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.330272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.330303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.330549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.330580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.330790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.330821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.331034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.331066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.331210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.331241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.331482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.331513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.331664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.331696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 
00:59:09.992 [2024-06-11 03:55:51.331844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.331875] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.332086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.332118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.332273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.332304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.332508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.332539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.332686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.332717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.332933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.332964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.333178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.333210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.333358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.333390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.333620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.333651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.333876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.333907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 
00:59:09.992 [2024-06-11 03:55:51.334119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.334152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.334301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.334333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.334485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.334516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.334721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.334760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.334870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.334882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.334975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.334986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.335160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.335172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.992 qpair failed and we were unable to recover it. 00:59:09.992 [2024-06-11 03:55:51.335414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.992 [2024-06-11 03:55:51.335446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.335602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.335632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.335796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.335827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 
00:59:09.993 [2024-06-11 03:55:51.336022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.336034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.336226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.336238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.336416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.336428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.336601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.336612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.336731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.336742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.336914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.336925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.337037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.337069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.337264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.337296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.337427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.337439] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.337541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.337553] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 
00:59:09.993 [2024-06-11 03:55:51.337653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.337689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.337853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.337884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.338095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.338127] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.338278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.338309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.338453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.338484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.338631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.338661] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.338815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.338827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.338994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.339006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.339238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.339269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.339489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.339521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 
00:59:09.993 [2024-06-11 03:55:51.339730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.339761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.339892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.339903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.340013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.340024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.340248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.340259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.340424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.340435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.340539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.340550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.340711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.340722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:09.993 [2024-06-11 03:55:51.340921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:09.993 [2024-06-11 03:55:51.340933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:09.993 qpair failed and we were unable to recover it. 00:59:10.281 [2024-06-11 03:55:51.341107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.341119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.281 qpair failed and we were unable to recover it. 00:59:10.281 [2024-06-11 03:55:51.341305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.341317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.281 qpair failed and we were unable to recover it. 
00:59:10.281 [2024-06-11 03:55:51.341474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.341486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.281 qpair failed and we were unable to recover it. 00:59:10.281 [2024-06-11 03:55:51.341645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.341656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.281 qpair failed and we were unable to recover it. 00:59:10.281 [2024-06-11 03:55:51.341835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.341848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.281 qpair failed and we were unable to recover it. 00:59:10.281 [2024-06-11 03:55:51.341951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.341963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.281 qpair failed and we were unable to recover it. 00:59:10.281 [2024-06-11 03:55:51.342085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.342098] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.281 qpair failed and we were unable to recover it. 00:59:10.281 [2024-06-11 03:55:51.342267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.281 [2024-06-11 03:55:51.342278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.342379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.342391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.342520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.342531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.342803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.342815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.342881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.342892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 
00:59:10.282 [2024-06-11 03:55:51.343067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.343079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.343277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.343289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.343390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.343401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.343568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.343579] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.343772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.343784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.343901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.343913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.344077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.344192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.344314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.344416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 
00:59:10.282 [2024-06-11 03:55:51.344526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.344629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.344743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.344889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.344903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.345014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.345026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.345187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.345197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.345301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.345312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.345398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.345409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.345510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.345541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.345777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.345809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 
00:59:10.282 [2024-06-11 03:55:51.345959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.345990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.346151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.346182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.346392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.346423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.346587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.346619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.346752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.346764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.346925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.346937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.347111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.347123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.347223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.347252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.347395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.347427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.347596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.347626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 
00:59:10.282 [2024-06-11 03:55:51.347769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.347800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.347933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.347945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.348119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.348150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.282 qpair failed and we were unable to recover it. 00:59:10.282 [2024-06-11 03:55:51.348308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.282 [2024-06-11 03:55:51.348339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.348555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.348586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.348797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.348828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.348945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.348976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.349226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.349257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.349462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.349493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.349676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.349686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 
00:59:10.283 [2024-06-11 03:55:51.349866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.349878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.350003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.350019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.350222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.350252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.350522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.350552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.350689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.350721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.350865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.350876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.351060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.351101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.351280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.351310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.351484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.351497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.351598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.351610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 
00:59:10.283 [2024-06-11 03:55:51.351706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.351721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.351838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.351850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.351946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.351958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.352189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.352221] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.352448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.352480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.352744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.352783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.352974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.352985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.353147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.353169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.353408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.353420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 00:59:10.283 [2024-06-11 03:55:51.353521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.283 [2024-06-11 03:55:51.353532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.283 qpair failed and we were unable to recover it. 
00:59:10.283 [2024-06-11 03:55:51.353702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.283 [2024-06-11 03:55:51.353734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.283 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 03:55:51.353894 through 03:55:51.369972 ...]
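Errno 111 in the posix_sock_create records above is ECONNREFUSED: the initiator can reach 10.0.0.2, but nothing is accepting on NVMe/TCP port 4420, so every qpair reconnect attempt is refused. A minimal standalone sketch (plain POSIX sockets, not SPDK code; the address and port are taken from the log) reproduces the same errno on a host where the address is reachable but no target is listening:

```c
/* Minimal sketch, not SPDK code: a bare connect() to 10.0.0.2:4420
 * surfaces errno 111 (ECONNREFUSED) when the host is reachable but no
 * NVMe/TCP target listens on the port - the condition posix_sock_create
 * is logging above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints errno = 111. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```

Compiled with `gcc -o refused refused.c` and run against a port with no listener, it prints `connect() failed, errno = 111 (Connection refused)`; if the address were unreachable instead, the errno would differ (e.g. ETIMEDOUT).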
[... the same failure for tqpair=0x7f01b0000b90 continues through 03:55:51.371221 ...]
00:59:10.286 [2024-06-11 03:55:51.371278] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb70e30 (9): Bad file descriptor
00:59:10.286 [2024-06-11 03:55:51.371608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.286 [2024-06-11 03:55:51.371679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420
00:59:10.286 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f01b8000b90 (addr=10.0.0.2, port=4420) from 03:55:51.371864 through 03:55:51.377800, then resumes against tqpair=0x7f01b0000b90 at 03:55:51.377921 ...]
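The single nvme_tcp_qpair_process_completions record above fails with errno 9 (EBADF) because the qpair's socket had already been torn down by the time the flush ran; the connect attempts that follow are logged against a different tqpair address (0x7f01b8000b90). A minimal sketch (plain POSIX sockets, not SPDK code) of I/O on an already-closed descriptor shows the same errno:

```c
/* Minimal sketch, not SPDK code: any I/O through a socket fd that has
 * already been closed fails with errno 9 (EBADF), the same
 * "(9): Bad file descriptor" that the flush in
 * nvme_tcp_qpair_process_completions reports once the socket is gone. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                      /* tear the socket down first */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0)  /* I/O on the dead fd ... */
        fprintf(stderr, "send failed, errno = %d (%s)\n",
                errno, strerror(errno)); /* ... prints errno = 9 (EBADF) */
    return 0;
}
```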
[... the same three-line failure for tqpair=0x7f01b0000b90 (addr=10.0.0.2, port=4420) continues to repeat for every reconnect attempt from 03:55:51.378269 through 03:55:51.395831 ...]
00:59:10.289 [2024-06-11 03:55:51.395988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.396040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.396150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.396161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.396257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.396269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.396439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.396450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.396702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.396733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.396894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.396926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.397129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.397161] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.397301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.397332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.397539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.397571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.397858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.397870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 
00:59:10.289 [2024-06-11 03:55:51.397973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.397984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.398767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.398790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.398911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.398924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.399106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.399118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.399274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.399285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.400334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.400356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.400468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.400482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.400656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.400668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.400843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.400854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.401025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.401037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 
00:59:10.289 [2024-06-11 03:55:51.401147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.401158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.401307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.401319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.401444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.401475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.401658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.401689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.289 [2024-06-11 03:55:51.401822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.289 [2024-06-11 03:55:51.401835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.289 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.402002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.402044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.402205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.402237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.402396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.402428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.402577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.402608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.402782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.402814] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 
00:59:10.290 [2024-06-11 03:55:51.403020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.403053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.403258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.403290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.403507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.403539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.403739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.403769] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.403973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.404029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.404206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.404238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.404395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.404426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.404564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.404595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.404754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.404785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.406100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.406121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 
00:59:10.290 [2024-06-11 03:55:51.406375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.406387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.406547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.406559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.406667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.406678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.407278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.407299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.407534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.407546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.407655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.407666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.407872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.407904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.408121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.408154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.408314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.408345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.408559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.408590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 
00:59:10.290 [2024-06-11 03:55:51.408789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.408821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.409030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.409062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.409277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.409309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.409469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.409499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.409787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.409818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.409959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.409990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.410219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.410250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.290 qpair failed and we were unable to recover it. 00:59:10.290 [2024-06-11 03:55:51.410420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.290 [2024-06-11 03:55:51.410453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.410674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.410706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.410945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.410976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 
00:59:10.291 [2024-06-11 03:55:51.411189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.411222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.411434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.411466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.411693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.411726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.411884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.411915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.412646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.412667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.412862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.412874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.412986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.413027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.413219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.413250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.413419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.413451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.413586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.413617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 
00:59:10.291 [2024-06-11 03:55:51.413834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.413865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.414076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.414108] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.414362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.414373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.414601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.414613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.414716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.414730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.414840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.414852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.414950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.414962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.415137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.415150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.415256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.415271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.415435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.415456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 
00:59:10.291 [2024-06-11 03:55:51.415588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.415606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.415757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.415773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.415877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.415890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.416079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.416094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.416253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.416265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.416368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.416379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.416515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.416526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.416684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.416696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.416794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.416806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.416894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.416906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 
00:59:10.291 [2024-06-11 03:55:51.417086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.417099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.417295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.417306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.417406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.417417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.417516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.417527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.417626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.417637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.291 qpair failed and we were unable to recover it. 00:59:10.291 [2024-06-11 03:55:51.417763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.291 [2024-06-11 03:55:51.417774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.417888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.417899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.418008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.418025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.418186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.418197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.418374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.418385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 
00:59:10.292 [2024-06-11 03:55:51.418488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.418500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.418675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.418687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.418787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.418799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.418977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.418989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.419177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.419189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.419285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.419297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.419412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.419424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.419536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.419548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.419712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.419724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.419821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.419833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 
00:59:10.292 [2024-06-11 03:55:51.419944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.419956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.420985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.420997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.421171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.421183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 
00:59:10.292 [2024-06-11 03:55:51.421347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.421358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.421454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.421465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.421625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.421636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.421735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.421747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.421987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.421999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.422176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.422187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.422296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.422308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.422462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.422473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.422566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.422577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 00:59:10.292 [2024-06-11 03:55:51.422747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.292 [2024-06-11 03:55:51.422758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.292 qpair failed and we were unable to recover it. 
00:59:10.292 [2024-06-11 03:55:51.422916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.292 [2024-06-11 03:55:51.422928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.292 qpair failed and we were unable to recover it.
[The three-line triplet above repeats essentially verbatim from 03:55:51.422 through 03:55:51.456 (console timestamps 00:59:10.292-00:59:10.298), differing only in microsecond timestamps and in the tqpair handle, which cycles among 0x7f01b0000b90, 0x7f01b8000b90, and 0x7f01a8000b90. Every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."]
00:59:10.298 [2024-06-11 03:55:51.456463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.298 [2024-06-11 03:55:51.456475] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.298 qpair failed and we were unable to recover it. 00:59:10.298 [2024-06-11 03:55:51.456545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.298 [2024-06-11 03:55:51.456559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.298 qpair failed and we were unable to recover it. 00:59:10.298 [2024-06-11 03:55:51.456656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.298 [2024-06-11 03:55:51.456668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.298 qpair failed and we were unable to recover it. 00:59:10.298 [2024-06-11 03:55:51.456855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.298 [2024-06-11 03:55:51.456868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.298 qpair failed and we were unable to recover it. 00:59:10.298 [2024-06-11 03:55:51.457031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.298 [2024-06-11 03:55:51.457043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.298 qpair failed and we were unable to recover it. 00:59:10.298 [2024-06-11 03:55:51.457163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.298 [2024-06-11 03:55:51.457175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.298 qpair failed and we were unable to recover it. 00:59:10.298 [2024-06-11 03:55:51.457276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.298 [2024-06-11 03:55:51.457288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.298 qpair failed and we were unable to recover it. 00:59:10.298 [2024-06-11 03:55:51.457353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.457365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.457473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.457485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.457714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.457726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 
00:59:10.299 [2024-06-11 03:55:51.457958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.457971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.458130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.458143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.458417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.458429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.458605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.458617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.458776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.458788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.458995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.459008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.459168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.459181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.459303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.459316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.459487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.459500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.459618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.459630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 
00:59:10.299 [2024-06-11 03:55:51.459879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.459893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.459969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.459982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.460143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.460156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.460322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.460335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.460448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.460460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.460619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.460631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.460836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.460849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.461019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.461031] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.461196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.461209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.461380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.461393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 
00:59:10.299 [2024-06-11 03:55:51.461571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.461584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.461685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.461698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.461895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.461907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.462071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.462084] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.462261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.462274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.462457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.462469] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.462583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.462597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.462747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.462760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.462871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.462884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.463005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.463022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 
00:59:10.299 [2024-06-11 03:55:51.463207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.463220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.463445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.299 [2024-06-11 03:55:51.463460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.299 qpair failed and we were unable to recover it. 00:59:10.299 [2024-06-11 03:55:51.463569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.463582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.463782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.463795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.463929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.463941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.464052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.464065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.464298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.464311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.464524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.464536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.464700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.464713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.464879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.464893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 
00:59:10.300 [2024-06-11 03:55:51.465002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.465021] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.465186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.465199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.465318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.465331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.465433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.465446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.465559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.465572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.465676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.465689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.465860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.465873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.466053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.466066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.466252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.466265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.466429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.466441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 
00:59:10.300 [2024-06-11 03:55:51.466621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.466634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.466740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.466753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.466875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.466888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.467055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.467069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.467144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.467158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.467321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.467334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.467564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.467577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.467728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.467741] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.467977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.467991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.468098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.468112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 
00:59:10.300 [2024-06-11 03:55:51.468278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.300 [2024-06-11 03:55:51.468291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.300 qpair failed and we were unable to recover it. 00:59:10.300 [2024-06-11 03:55:51.468394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.468407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.468503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.468516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.468687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.468700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.468811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.468824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.468923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.468937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.469028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.469041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.469137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.469149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.469276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.469288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.469380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.469392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 
00:59:10.301 [2024-06-11 03:55:51.469622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.469635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.469735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.469749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.469918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.469931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.470081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.470094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.470260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.470273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.470372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.470385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.470568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.470582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.470682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.470695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.470854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.470867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.471120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.471133] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 
00:59:10.301 [2024-06-11 03:55:51.471272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.471285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.471457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.471470] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.471642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.471655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.471871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.471884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.471983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.471997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.472173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.472186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.472382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.472394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.472569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.472582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.472780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.472794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.472900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.472913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 
00:59:10.301 [2024-06-11 03:55:51.473153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.473166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.473289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.473303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.473560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.473573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.473750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.473763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.473866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.473878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.474005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.474023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.474141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.474154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.301 qpair failed and we were unable to recover it. 00:59:10.301 [2024-06-11 03:55:51.474384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.301 [2024-06-11 03:55:51.474398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.474516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.474529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.474698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.474711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 
00:59:10.302 [2024-06-11 03:55:51.474874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.474887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.474994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.475007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.475222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.475236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.475400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.475414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.475577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.475590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.475731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.475744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.475839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.475851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.476036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.476049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.476300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.476312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.476431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.476444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 
00:59:10.302 [2024-06-11 03:55:51.476652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.476666] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.476823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.476840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.477005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.477024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.477139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.477152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.477265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.477278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.477438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.477451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.477561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.477575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.477743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.477756] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.478021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.478034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 00:59:10.302 [2024-06-11 03:55:51.478286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.478299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it. 
00:59:10.302 [2024-06-11 03:55:51.478411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.302 [2024-06-11 03:55:51.478425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.302 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats for roughly 200 consecutive attempts between 03:55:51.478 and 03:55:51.513, every one with errno = 111 against addr=10.0.0.2, port=4420; all attempts are on tqpair=0x7f01b0000b90 except a short run of six attempts on tqpair=0x7f01b8000b90 around 03:55:51.509 ...]
00:59:10.308 [2024-06-11 03:55:51.513515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.513527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it.
00:59:10.308 [2024-06-11 03:55:51.513631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.513644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.513805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.513817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.514047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.514059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.514184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.514197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.514399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.514412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.514525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.514537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.514775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.514788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.514907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.514920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.515084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.515097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.515257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.515270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 
00:59:10.308 [2024-06-11 03:55:51.515456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.515469] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.515630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.515642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.515824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.515836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.516029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.516044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.516149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.516163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.516357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.516369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.516549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.516561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.516716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.516729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.516883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.516895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.517147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.517159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 
00:59:10.308 [2024-06-11 03:55:51.517388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.517400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.517581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.517595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.517771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.517783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.517951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.517962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.518091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.518103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.518197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.518209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.518326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.518339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.518566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.308 [2024-06-11 03:55:51.518578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.308 qpair failed and we were unable to recover it. 00:59:10.308 [2024-06-11 03:55:51.518685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.518697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.518861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.518873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 
00:59:10.309 [2024-06-11 03:55:51.518964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.518976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.519069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.519081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.519181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.519193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.519305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.519318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.519416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.519429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.519554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.519567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.519665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.519679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.519902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.519915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.520128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.520142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.520312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.520324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 
00:59:10.309 [2024-06-11 03:55:51.520518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.520531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.520669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.520681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.520843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.520855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.521061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.521172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521184] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.521288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521300] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.521405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.521534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.521646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.521754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 
00:59:10.309 [2024-06-11 03:55:51.521933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.521946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.522125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.522138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.522368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.522381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.522475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.522487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.522618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.522631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.522879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.522892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.523157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.523170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.523422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.523434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.523545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.523558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.309 [2024-06-11 03:55:51.523652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.523664] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 
00:59:10.309 [2024-06-11 03:55:51.523828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.309 [2024-06-11 03:55:51.523840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.309 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.523969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.523983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.524099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.524112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.524251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.524263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.524441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.524454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.524619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.524632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.524815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.524828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.524958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.524970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.525139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.525152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.525322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.525334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 
00:59:10.310 [2024-06-11 03:55:51.525496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.525508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.525679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.525692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.525798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.525810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.525965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.525978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.526087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.526100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.526292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.526323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.526595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.526626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.526763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.526793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.526943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.526974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.527185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.527203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 
00:59:10.310 [2024-06-11 03:55:51.527389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.527405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.527507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.527523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.527725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.527741] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.527921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.527937] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.528176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.528193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.528370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.528386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.528626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.528642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.528751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.528767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.528867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.528883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.529057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.529071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 
00:59:10.310 [2024-06-11 03:55:51.529176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.529199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.529454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.529466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.529558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.529570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.529745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.529758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.529951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.529964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.530201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.530213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.530391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.530403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.530498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.530511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.530608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.310 [2024-06-11 03:55:51.530620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.310 qpair failed and we were unable to recover it. 00:59:10.310 [2024-06-11 03:55:51.530730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.530743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 
00:59:10.311 [2024-06-11 03:55:51.530865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.530877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.531050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.531063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.531225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.531238] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.531336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.531348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.531417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.531430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.531618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.531631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.531813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.531825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.531937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.531949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.532199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.532213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.532440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.532453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 
00:59:10.311 [2024-06-11 03:55:51.532617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.532629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.532722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.532734] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.532906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.532919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.533024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.533036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.533132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.533144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.533242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.533255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.533361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.533374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.533562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.533575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.533799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.533811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.533978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.533991] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 
00:59:10.311 [2024-06-11 03:55:51.534096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.534109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.534321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.534333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.534496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.534508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.534690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.534702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.534821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.534833] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.534947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.534960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.535137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.535150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.535321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.535333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.535509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.535524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 00:59:10.311 [2024-06-11 03:55:51.535686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.535698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it. 
00:59:10.311 [2024-06-11 03:55:51.535870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.311 [2024-06-11 03:55:51.535882] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.311 qpair failed and we were unable to recover it.
[... the same failure repeats continuously from 03:55:51.535 to 03:55:51.572: every connect() attempt to 10.0.0.2 port 4420 returns errno = 111, and each qpair ends with "qpair failed and we were unable to recover it." ...]
00:59:10.317 [2024-06-11 03:55:51.572402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.572415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it.
00:59:10.317 [2024-06-11 03:55:51.572590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.572603] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.572776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.572790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.572905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.572919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.573036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.573049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.573283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.573296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.573458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.573471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.573569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.573582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.573738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.573751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.573927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.573939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.574045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.574058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 
00:59:10.317 [2024-06-11 03:55:51.574150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.574163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.574398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.574410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.574583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.574596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.574758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.574771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.574999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.575015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.575129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.575142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.575251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.575263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.575464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.575477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.575656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.575669] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.575758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.575771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 
00:59:10.317 [2024-06-11 03:55:51.575947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.575961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.317 [2024-06-11 03:55:51.576072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.317 [2024-06-11 03:55:51.576086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.317 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.576188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.576201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.576389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.576402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.576573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.576586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.576762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.576776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.576884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.576897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.577080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.577093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.577261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.577277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.577391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.577404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 
00:59:10.318 [2024-06-11 03:55:51.577585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.577598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.577708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.577721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.577970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.577982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.578158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.578171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.578269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.578282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.578445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.578458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.578655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.578667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.578914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.578928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.579110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.579123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.579285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.579298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 
00:59:10.318 [2024-06-11 03:55:51.579410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.579423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.579541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.579555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.579678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.579691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.579855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.579869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.579957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.579970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.580142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.580156] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.580337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.580350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.580449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.580461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.580689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.580702] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.580812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.580825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 
00:59:10.318 [2024-06-11 03:55:51.581027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.581040] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.581166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.581179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.581382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.581396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.581521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.581534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.318 qpair failed and we were unable to recover it. 00:59:10.318 [2024-06-11 03:55:51.581699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.318 [2024-06-11 03:55:51.581711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.581992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.582036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.582235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.582254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.582482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.582498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.582632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.582648] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.582855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.582871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 
00:59:10.319 [2024-06-11 03:55:51.583129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.583146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.583276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.583293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.583529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.583546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.583784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.583800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.583978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.583993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.584181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.584198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.584435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.584452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.584625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.584641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.584812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.584828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.585092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.585109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 
00:59:10.319 [2024-06-11 03:55:51.585394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.585411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.585589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.585605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.585774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.585791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.586027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.586044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.586302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.586319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.586527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.586543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.586723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.586739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.586999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.587020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.587151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.587168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.587406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.587422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 
00:59:10.319 [2024-06-11 03:55:51.587605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.587623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.587802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.587819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.588031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.588053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.588286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.588302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.588489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.588506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.588695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.588712] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.588868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.588885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.589061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.589078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.589340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.589356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.589559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.589575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 
00:59:10.319 [2024-06-11 03:55:51.589813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.589829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.589936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.589952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.590120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.319 [2024-06-11 03:55:51.590137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.319 qpair failed and we were unable to recover it. 00:59:10.319 [2024-06-11 03:55:51.590309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.590326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.590508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.590525] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.590700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.590716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.590849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.590866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.591037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.591054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.591237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.591254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.591440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.591456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 
00:59:10.320 [2024-06-11 03:55:51.591644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.591660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.591775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.591791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.591903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.591920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.592085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.592101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.592339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.592355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.592473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.592490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.592699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.592716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.592906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.592922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.593038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.593056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.593223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.593243] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 
00:59:10.320 [2024-06-11 03:55:51.593478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.593495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.593682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.593699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.593885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.593901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.594148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.594165] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.594373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.594389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.594579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.594595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.594728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.594745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.594983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.595000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.595202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.595223] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.595361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.595378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 
00:59:10.320 [2024-06-11 03:55:51.595588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.595605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.595842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.595859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.596027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.596044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.596227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.596248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.596360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.596376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.596531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.596544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.596651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.596665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.596836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.596849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.596972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.596985] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.597169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.597183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 
00:59:10.320 [2024-06-11 03:55:51.597352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.597365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.320 [2024-06-11 03:55:51.597490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.320 [2024-06-11 03:55:51.597503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.320 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.597762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.597776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.597949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.597962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.598062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.598076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.598247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.598260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.598372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.598387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.598497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.598510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.598621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.598634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 00:59:10.321 [2024-06-11 03:55:51.598794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.321 [2024-06-11 03:55:51.598806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.321 qpair failed and we were unable to recover it. 
00:59:10.321 [2024-06-11 03:55:51.602560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.321 [2024-06-11 03:55:51.602579] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420
00:59:10.321 qpair failed and we were unable to recover it.
00:59:10.321 [2024-06-11 03:55:51.602773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.321 [2024-06-11 03:55:51.602789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420
00:59:10.321 qpair failed and we were unable to recover it.
00:59:10.321 [2024-06-11 03:55:51.602976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.321 [2024-06-11 03:55:51.602993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420
00:59:10.321 qpair failed and we were unable to recover it.
00:59:10.326 [2024-06-11 03:55:51.633223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.633236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.633399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.633411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.633653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.633665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.633791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.633804] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.633914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.633926] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.634042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.634055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.634180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.634193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.634304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.634316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.634519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.634532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.634708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.634721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 
00:59:10.326 [2024-06-11 03:55:51.634966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.634980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.635144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.635158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.635328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.635341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.635523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.635537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.635660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.635673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.635778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.635792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.635890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.635903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.326 [2024-06-11 03:55:51.636077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.326 [2024-06-11 03:55:51.636090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.326 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.636263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.636276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.636435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.636447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 
00:59:10.327 [2024-06-11 03:55:51.636559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.636572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.636771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.636784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.636913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.636927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.637022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.637039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.637219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.637233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.637327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.637340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.637528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.637542] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.637655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.637668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.637782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.637796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.638014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.638029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 
00:59:10.327 [2024-06-11 03:55:51.638205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.638218] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.638326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.638340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.638469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.638483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.638665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.638679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.638801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.638814] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.638928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.638942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.639056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.639070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.639234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.639247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.639420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.639433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.639608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.639621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 
00:59:10.327 [2024-06-11 03:55:51.639739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.639752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.639942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.639956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.640067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.640081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.640199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.640213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.640390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.640403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.640572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.640586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.640709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.640722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.640832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.640846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.640951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.640964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.641062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.641075] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 
00:59:10.327 [2024-06-11 03:55:51.641181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.641195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.641332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.641346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.641454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.327 [2024-06-11 03:55:51.641468] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.327 qpair failed and we were unable to recover it. 00:59:10.327 [2024-06-11 03:55:51.641562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.641576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.641805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.641818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.641926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.641939] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.642023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.642037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.642148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.642162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.642360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.642374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.642545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.642558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 
00:59:10.328 [2024-06-11 03:55:51.642753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.642767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.642936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.642950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.643097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.643110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.643213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.643229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.643301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.643314] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.643493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.643506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.643598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.643612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.643775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.643788] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.643969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.643983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.328 [2024-06-11 03:55:51.644168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.644181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 
00:59:10.328 [2024-06-11 03:55:51.644381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.328 [2024-06-11 03:55:51.644394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.328 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.644580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.644593] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.644769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.644782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.644954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.644968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.645165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.645178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.645367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.645380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.645506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.645519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.645703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.645716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.645836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.645849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.646047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.646061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 
00:59:10.329 [2024-06-11 03:55:51.646239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.646251] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.646365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.646378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.646484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.646497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.646605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.646618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.646787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.646800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.646913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.646925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.647086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.647100] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.647325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.647339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.647450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.647463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.647621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.647634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 
00:59:10.329 [2024-06-11 03:55:51.647819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.647832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.647949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.647962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.648079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.648093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.648269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.648282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.648379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.648392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.648501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.648515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.648677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.648690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.648855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.648868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.649032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.649046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.649146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.649159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 
00:59:10.329 [2024-06-11 03:55:51.649270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.649283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.649462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.649476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.649706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.649719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.649832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.649848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.649966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.329 [2024-06-11 03:55:51.649979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.329 qpair failed and we were unable to recover it. 00:59:10.329 [2024-06-11 03:55:51.650111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.650123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.650251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.650264] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.650447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.650460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.650725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.650738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.650926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.650940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 
00:59:10.330 [2024-06-11 03:55:51.651183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.651196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.651310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.651322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.651548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.651560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.651803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.651816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.652068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.652081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.652198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.652211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.652330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.652342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.652457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.652471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.652646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.652659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.652834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.652847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 
00:59:10.330 [2024-06-11 03:55:51.653098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.653111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.653341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.653355] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.653484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.653497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.653681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.653693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.653786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.653799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.654046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.654060] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.654164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.654177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.654402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.654416] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.654576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.654589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 00:59:10.330 [2024-06-11 03:55:51.654767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.330 [2024-06-11 03:55:51.654780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.330 qpair failed and we were unable to recover it. 
00:59:10.330 [2024-06-11 03:55:51.654892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.330 [2024-06-11 03:55:51.654905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.330 qpair failed and we were unable to recover it.
[... the same error pair repeats continuously from 03:55:51.654892 through 03:55:51.696523: every connect() attempt to addr=10.0.0.2, port=4420 fails with errno = 111 at posix.c:1037:posix_sock_create, nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock reports the sock connection error (tqpair=0x7f01b0000b90; tqpair=0x7f01a8000b90 from 03:55:51.675941; tqpair=0x7f01b0000b90 again from 03:55:51.694160), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:59:10.620 [2024-06-11 03:55:51.696753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.696785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.697030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.697062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.697271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.697303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.697523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.697536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.697714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.697745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.697882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.697913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.698214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.698246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.698519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.698551] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.698700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.698736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.698982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.698993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 
00:59:10.620 [2024-06-11 03:55:51.699140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.699152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.699256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.699267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.699430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.620 [2024-06-11 03:55:51.699441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.620 qpair failed and we were unable to recover it. 00:59:10.620 [2024-06-11 03:55:51.699520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.699531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.699658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.699689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.699902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.699933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.700137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.700169] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.700376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.700407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.700611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.700642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.700754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.700785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 
00:59:10.621 [2024-06-11 03:55:51.700947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.700978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.701205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.701242] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.701407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.701438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.701732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.701749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.701915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.701927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.702098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.702110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.702275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.702306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.702536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.702567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.702713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.702745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.703007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.703022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 
00:59:10.621 [2024-06-11 03:55:51.703190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.703206] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.703376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.703407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.703610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.703641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.703934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.703965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.704282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.704313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.704531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.704562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.704837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.704867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.705029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.705061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.705332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.705363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.705545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.705576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 
00:59:10.621 [2024-06-11 03:55:51.705729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.705759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.706025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.706037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.706208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.706220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.706466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.706480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.706713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.706744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.706981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.707020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.707301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.707332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.707559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.621 [2024-06-11 03:55:51.707589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.621 qpair failed and we were unable to recover it. 00:59:10.621 [2024-06-11 03:55:51.707866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.707877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.708041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.708052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 
00:59:10.622 [2024-06-11 03:55:51.708257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.708287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.708506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.708537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.708688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.708719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.708936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.708967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.709177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.709208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.709516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.709547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.709845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.709876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.710157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.710189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.710467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.710499] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.710719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.710750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 
00:59:10.622 [2024-06-11 03:55:51.710949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.710980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.711207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.711239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.711513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.711544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.711745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.711776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.711927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.711957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.712181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.712213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.712501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.712533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.712830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.712861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.713023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.713054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 00:59:10.622 [2024-06-11 03:55:51.713268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.622 [2024-06-11 03:55:51.713299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.622 qpair failed and we were unable to recover it. 
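The triplet above repeats with new timestamps for as long as the target is down: errno 111 on Linux is ECONNREFUSED, meaning each TCP connection attempt to 10.0.0.2:4420 was rejected because no NVMe/TCP listener was bound to the port. A minimal sketch that produces the same errno (an illustration only, not SPDK code; against an unreachable host the errno may instead be EHOSTUNREACH or a timeout):

    /* refused.c -- build with: cc -o refused refused.c
     * Attempts one TCP connect to the address/port seen in the log and
     * prints the resulting errno; with nothing listening this reports
     * errno = 111 (ECONNREFUSED), matching posix_sock_create's message. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }

The host keeps retrying, so the same pair of messages recurs until the target restart traced below completes.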
00:59:10.622 [2024-06-11 03:55:51.713512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.622 [2024-06-11 03:55:51.713582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:10.622 qpair failed and we were unable to recover it.
00:59:10.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2412547 Killed "${NVMF_APP[@]}" "$@"
00:59:10.622 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:59:10.622 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:59:10.622 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:59:10.623 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable
00:59:10.623 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2413482
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2413482
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # '[' -z 2413482 ']'
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:59:10.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
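For readers decoding the restart trace: -m 0xF0 is the CPU core mask handed to the SPDK app (binary 11110000, i.e. cores 4-7), -i 0 is the shared-memory instance id, and -e 0xFFFF is a tracepoint group mask; waitforlisten then blocks until the new nvmf_tgt process (pid 2413482) is serving RPCs on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A rough readiness probe in the same spirit (a sketch of the idea only, not SPDK's actual waitforlisten helper):

    /* rpc_wait.c -- build with: cc -o rpc_wait rpc_wait.c
     * Polls a UNIX-domain socket until something accepts connections on it,
     * roughly the condition the harness waits for above. The socket path and
     * the 100-attempt limit mirror rpc_addr and max_retries from the trace. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int rpc_ready(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un sa = { 0 };
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
        int ok = connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        for (int i = 0; i < 100; i++) {             /* max_retries=100 */
            if (rpc_ready("/var/tmp/spdk.sock")) {
                puts("target is listening");
                return 0;
            }
            sleep(1);                               /* retry once per second */
        }
        puts("timed out waiting for target");
        return 1;
    }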
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable
00:59:10.624 03:55:51 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:59:10.625 [2024-06-11 03:55:51.726094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.726108] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.726221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.726232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.726397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.726408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.726522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.726533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.726645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.726656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.726774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.726785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.726909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.726920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.727163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.727175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.727293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.727303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.727539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.727550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 
00:59:10.625 [2024-06-11 03:55:51.727746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.727757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.727929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.727940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.728107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.728118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.728375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.728387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.728498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.728509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.728740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.728752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.728980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.728992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.729168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.729179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.729288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.729299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.729407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.729418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 
00:59:10.625 [2024-06-11 03:55:51.729576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.729587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.729708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.729719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.729819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.729830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.729988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.730000] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.730246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.730257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.730384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.730395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.730562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.730573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.730742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.730754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.730848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.730860] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.730956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.730967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 
00:59:10.625 [2024-06-11 03:55:51.731062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.731074] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.731185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.731196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.731421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.731432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.731538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.731549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.731671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.731683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.625 [2024-06-11 03:55:51.731839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.625 [2024-06-11 03:55:51.731851] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.625 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.731967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.731979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.732103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.732114] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.732305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.732316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.732410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.732422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 
00:59:10.626 [2024-06-11 03:55:51.732616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.732627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.732800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.732812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.732988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.732999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.733111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.733122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.733232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.733244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.733356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.733367] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.733440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.733451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.733556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.733567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.733730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.733742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.733933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.733944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 
00:59:10.626 [2024-06-11 03:55:51.734097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.734109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.734234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.734246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.734369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.734381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.734542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.734554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.734750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.734761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.734928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.734940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.735038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.735049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.735219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.735230] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.735398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.735410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.735514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.735526] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 
00:59:10.626 [2024-06-11 03:55:51.735620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.735631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.735754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.735765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.735942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.735953] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.736204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.736215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.736372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.736383] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.736594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.736605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.736707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.736718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.736809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.736823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.736928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.736940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.737104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.737116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 
00:59:10.626 [2024-06-11 03:55:51.737294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.737305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.737386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.737397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.737575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.626 [2024-06-11 03:55:51.737587] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.626 qpair failed and we were unable to recover it. 00:59:10.626 [2024-06-11 03:55:51.737707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.737718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.737820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.737832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.737948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.737959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.738120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.738131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.738248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.738259] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.738430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.738441] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.738591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.738602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 
00:59:10.627 [2024-06-11 03:55:51.738775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.738787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.738947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.738958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.739181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.739193] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.739364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.739375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.739495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.739507] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.739604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.739615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.739867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.739878] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.739996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.740008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.740112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.740123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.740280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.740291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 
00:59:10.627 [2024-06-11 03:55:51.740454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.740466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.740555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.740567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.740721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.740733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.740902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.740913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.741100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.741112] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.741284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.741295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.741396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.741407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.741550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.741561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.741812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.741824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.742015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.742026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 
00:59:10.627 [2024-06-11 03:55:51.742205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.742216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.742421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.627 [2024-06-11 03:55:51.742433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.627 qpair failed and we were unable to recover it. 00:59:10.627 [2024-06-11 03:55:51.742537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.742548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.742648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.742660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.742943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.742954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.743143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.743154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.743377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.743388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.743617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.743629] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.743806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.743817] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.743923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.743934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 
00:59:10.628 [2024-06-11 03:55:51.744037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.744048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.744243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.744254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.744410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.744421] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.744530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.744541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.744710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.744721] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.744945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.744956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.745054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.745066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.745274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.745285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.745400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.745412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.745530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.745541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 
00:59:10.628 [2024-06-11 03:55:51.745727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.745739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.745855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.745866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.746038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.746050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.746142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.746153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.746263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.746274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.746455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.746466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.746558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.746570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.746679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.746691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.746850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.746861] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.747088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.747099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 
00:59:10.628 [2024-06-11 03:55:51.747209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.747220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.747323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.747335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.747561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.747573] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.747811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.747822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.628 [2024-06-11 03:55:51.747936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.628 [2024-06-11 03:55:51.747947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.628 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.748053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748065] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.748162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748173] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.748288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.748415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.748590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 
00:59:10.629 [2024-06-11 03:55:51.748690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.748868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748880] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.748972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.748984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.749143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.749155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.749315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.749326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.749499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.749509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.749630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.749642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.749726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.749739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.749855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.749866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.749986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.749997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 
00:59:10.629 [2024-06-11 03:55:51.750182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.750194] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.750368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.750380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.750478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.750489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.750649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.750660] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.750765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.750776] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.750895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.750906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.751085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.751097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.751164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.751175] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.751279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.751291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.751403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.751414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 
00:59:10.629 [2024-06-11 03:55:51.751583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.751595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.751754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.751766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.751892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.751903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.752152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.752164] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.752400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.752411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.752517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.752529] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.752638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.752649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.752901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.752913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.753071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.753082] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 00:59:10.629 [2024-06-11 03:55:51.753200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.629 [2024-06-11 03:55:51.753211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.629 qpair failed and we were unable to recover it. 
00:59:10.629 [2024-06-11 03:55:51.753307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.753319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.753563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.753574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.753753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.753764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.753990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.754001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.754167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.754179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.754346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.754357] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.754472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.754483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.754593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.754604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.754769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.754780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.754936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.754948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 
00:59:10.630 [2024-06-11 03:55:51.755083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.755095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.755321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.755332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.755456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.755467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.755636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.755647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.755756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.755768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.755937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.755949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.756026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.756038] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.756174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.756186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.756370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.756382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.756537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.756548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 
00:59:10.630 [2024-06-11 03:55:51.756727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.756738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.756856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.756868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.757117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.757130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.757289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.757301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.757484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.757495] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.757755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.757766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.757882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.757893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.758124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.758136] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.758295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.758306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.758417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.758428] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 
00:59:10.630 [2024-06-11 03:55:51.758588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.758600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.761233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.761245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.761494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.630 [2024-06-11 03:55:51.761505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.630 qpair failed and we were unable to recover it. 00:59:10.630 [2024-06-11 03:55:51.761727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.761738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.761963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.761974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.762139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.762150] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.762325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.762336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.762534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.762545] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.762717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.762728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.762954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.762965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 
00:59:10.631 [2024-06-11 03:55:51.763175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.763186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.763418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.763430] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.763543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.763554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.763749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.763760] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.763934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.763946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.764134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.764145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.764255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.764266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.764436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.764448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.764647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.764658] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.764847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.764858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 
00:59:10.631 [2024-06-11 03:55:51.765044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.765055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.765215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.765226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.765347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.765358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.765463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.765474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.765586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.765598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.765819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.765830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.766056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.766068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.766318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.766331] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.766450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.766461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.766552] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
00:59:10.631 [2024-06-11 03:55:51.766591] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:10.631 [2024-06-11 03:55:51.766631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.766641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.766820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.766829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.767016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.767026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.767146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.767155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.631 [2024-06-11 03:55:51.767280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.631 [2024-06-11 03:55:51.767289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.631 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.767452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.767463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.767635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.767647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.767761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.767772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.767878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.767889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 
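The "Starting SPDK v24.09-pre ... DPDK 22.11.4 initialization" banner interleaved above marks a new SPDK application coming up mid-stream: the bracketed list is the argv that SPDK's env layer hands to DPDK's EAL (core mask 0xF0, i.e. cores 4-7; shared-memory file prefix spdk0; auto process type). A minimal sketch of that handoff, abridged from the banner and assuming a DPDK development environment (SPDK performs this internally, so this is illustrative, not SPDK code):

/* eal_init_sketch.c -- illustrative only. Feeds DPDK's EAL the same
 * style of parameters that appear in the log banner above. */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                          /* program name, as in the log  */
        "-c", "0xF0",                    /* core mask: run on cores 4-7  */
        "--no-telemetry",
        "--base-virtaddr=0x200000000000",
        "--file-prefix=spdk0",           /* isolates hugepage/shm files  */
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() parses the arguments and brings up the EAL;
     * a negative return value means initialization failed. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    printf("EAL up\n");
    return 0;
}

The distinct --file-prefix per process is what lets several DPDK-based applications (here, target and initiator sides of the test) share one machine's hugepages without clobbering each other's runtime files.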
00:59:10.632 [2024-06-11 03:55:51.768079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.768090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.768341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.768353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.768532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.768543] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.768788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.768800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.768915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.768927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.769042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.769054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.769278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.769290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.769384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.769395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.769544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.769555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.769725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.769736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 
00:59:10.632 [2024-06-11 03:55:51.769898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.769909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.770165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.770176] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.770341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.770353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.770527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.770538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.770703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.770714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.770834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.770845] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.770957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.770968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.771083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.771095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.771265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.771277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.771445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.771457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 
00:59:10.632 [2024-06-11 03:55:51.771566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.771578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.771657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.771668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.771757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.771768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.771923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.771934] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.772179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.772191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.772307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.772318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.772478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.772490] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.632 qpair failed and we were unable to recover it. 00:59:10.632 [2024-06-11 03:55:51.772608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.632 [2024-06-11 03:55:51.772620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.772733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.772747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.772859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.772871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 
00:59:10.633 [2024-06-11 03:55:51.773106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.773118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.773275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.773286] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.773392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.773404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.773563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.773574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.773663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.773674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.773843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.773855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.773955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.773966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.774191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.774202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.774313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.774324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.774489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.774500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 
00:59:10.633 [2024-06-11 03:55:51.774605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.774616] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.774801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.774811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.774997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.775008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.775276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.775287] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.775536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.775547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.775636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.775647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.775733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.775744] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.775967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.775978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.776160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.776172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.776281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.776294] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 
00:59:10.633 [2024-06-11 03:55:51.776403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.776414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.776609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.776620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.776788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.776800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.776970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.776982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.777099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.777110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.777320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.777333] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.777555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.777566] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.777734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.777746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.777921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.777932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.778105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.778117] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 
00:59:10.633 [2024-06-11 03:55:51.778225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.633 [2024-06-11 03:55:51.778236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.633 qpair failed and we were unable to recover it. 00:59:10.633 [2024-06-11 03:55:51.778354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.778365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.778591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.778602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.778767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.778779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.778939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.778950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.779187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.779198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.779369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.779381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.779550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.779562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.779748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.779761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.779902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.779913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 
00:59:10.634 [2024-06-11 03:55:51.780085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.780103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.780196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.780208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.780319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.780330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.780499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.780510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.780688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.780700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.780901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.780913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.781046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.781058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.781216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.781227] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.781470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.781481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 00:59:10.634 [2024-06-11 03:55:51.781595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.781606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it. 
00:59:10.634 [2024-06-11 03:55:51.781857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.634 [2024-06-11 03:55:51.781870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.634 qpair failed and we were unable to recover it.
[... the same three-message triplet — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 03:55:51.781 and 03:55:51.815, every attempt targeting tqpair=0x7f01b0000b90 at addr=10.0.0.2, port=4420; one unrelated DPDK message is interleaved mid-stream: ...]
00:59:10.636 EAL: No free 2048 kB hugepages reported on node 1
00:59:10.640 [2024-06-11 03:55:51.815608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.815619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it.
00:59:10.640 [2024-06-11 03:55:51.815737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.815748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.815845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.815856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.816023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.816035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.816194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.816205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.816309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.816320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.816493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.816505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.816599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.816610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.816713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.816724] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.816887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.816899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.817004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.817018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 
00:59:10.640 [2024-06-11 03:55:51.817128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.817141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.817257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.817268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.817462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.817474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.817593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.817604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.817778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.640 [2024-06-11 03:55:51.817789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.640 qpair failed and we were unable to recover it. 00:59:10.640 [2024-06-11 03:55:51.817882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.817894] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.818090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.818101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.818272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.818283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.818349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.818363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.818522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.818533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 
00:59:10.641 [2024-06-11 03:55:51.818659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.818670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.818785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.818796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.818886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.818897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.818990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.819002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.819237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.819249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.819429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.819440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.819547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.819559] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.819664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.819675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.819847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.819858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.820027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.820039] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 
00:59:10.641 [2024-06-11 03:55:51.820201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.820212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.820387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.820399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.820572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.820583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.820820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.820832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.820963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.820974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.821077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.821255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.821366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.821480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821491] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.821598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 
00:59:10.641 [2024-06-11 03:55:51.821707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821718] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.821840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.821960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.821971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.822060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.822071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.822314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.822326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.822395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.822406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.822578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.822589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.641 qpair failed and we were unable to recover it. 00:59:10.641 [2024-06-11 03:55:51.822658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.641 [2024-06-11 03:55:51.822670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.822894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.822906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.823075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.823086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 
00:59:10.642 [2024-06-11 03:55:51.823266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.823278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.823452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.823464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.823629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.823641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.823753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.823764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.823934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.823946] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.824112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.824124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.824233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.824246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.824369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.824380] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.824472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.824485] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.824602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.824614] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 
00:59:10.642 [2024-06-11 03:55:51.824718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.824730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.824966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.824978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.825137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.825149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.825307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.825318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.825474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.825486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.825691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.825703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.825936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.825947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.826051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.826063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.826222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.826233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.826391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.826402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 
00:59:10.642 [2024-06-11 03:55:51.826567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.826578] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.826734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.826745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.826952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.826963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.827055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.827067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.827170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.827183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.642 [2024-06-11 03:55:51.827294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.642 [2024-06-11 03:55:51.827306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.642 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.827554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.827565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.827729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.827741] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.827920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.827932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.828097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.828109] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 
00:59:10.643 [2024-06-11 03:55:51.828309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.828320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.828546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.828558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.828746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.828757] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.828892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.828904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.829066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.829079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.829252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.829263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.829459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.829470] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.829629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.829640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.829799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.829810] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.830017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.830032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 
00:59:10.643 [2024-06-11 03:55:51.830229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.830240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.830350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.830361] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.830525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.830537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.830702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.830714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.830907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.830918] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.831104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.831116] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.831279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.831291] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.831380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.831390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.831497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.831511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 00:59:10.643 [2024-06-11 03:55:51.831679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.643 [2024-06-11 03:55:51.831690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.643 qpair failed and we were unable to recover it. 
00:59:10.643 [2024-06-11 03:55:51.832211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.643 [2024-06-11 03:55:51.832252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420
00:59:10.643 qpair failed and we were unable to recover it.
00:59:10.643 [2024-06-11 03:55:51.832413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.643 [2024-06-11 03:55:51.832451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420
00:59:10.643 qpair failed and we were unable to recover it.
00:59:10.643 [2024-06-11 03:55:51.832651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.643 [2024-06-11 03:55:51.832679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:10.643 qpair failed and we were unable to recover it.
[The identical triplet for tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 then resumes and repeats continuously from 03:55:51.832862 through 03:55:51.843442; duplicates elided.]
00:59:10.645 [2024-06-11 03:55:51.843533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:10.645 [2024-06-11 03:55:51.843602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.843613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.843785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.843796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.843930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.843941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.844047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.844058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.844236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.844248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.844422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.844433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.844525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.844537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.844625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.645 [2024-06-11 03:55:51.844636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.645 qpair failed and we were unable to recover it. 00:59:10.645 [2024-06-11 03:55:51.844752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.844765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 
00:59:10.646 [2024-06-11 03:55:51.844958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.844970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.845084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.845096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.845191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.845203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.845411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.845422] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.845527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.845539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.845653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.845665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.845860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.845871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.845981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.845994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.846234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.846246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.846417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.846429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 
00:59:10.646 [2024-06-11 03:55:51.846598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.846609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.846772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.846784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.846941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.846956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.847060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.847072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.847312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.847323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.847501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.847512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.847698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.847710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.847935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.847947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.848111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.848123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.848237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.848249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 
00:59:10.646 [2024-06-11 03:55:51.848369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.848381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.848544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.848555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.848720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.848732] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.848907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.848919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.849084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.849097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.849202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.849213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.849324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.849336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.849496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.849508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.849678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.849691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.849861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.849873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 
00:59:10.646 [2024-06-11 03:55:51.850043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.850055] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.850220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.850232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.850420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.850433] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.850553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.646 [2024-06-11 03:55:51.850564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.646 qpair failed and we were unable to recover it. 00:59:10.646 [2024-06-11 03:55:51.850678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.850690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.850917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.850929] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.851090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.851103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.851297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.851309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.851423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.851434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.851597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.851611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 
00:59:10.647 [2024-06-11 03:55:51.851697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.851710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.851888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.851899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.852114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.852128] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.852358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.852370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.852473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.852486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.852593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.852605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.852678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.852689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.852804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.852816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.852980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.852993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.853180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.853192] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 
00:59:10.647 [2024-06-11 03:55:51.853301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.853313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.853468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.853479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.853657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.853673] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.853850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.853862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.853951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.853963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.854064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.854076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.854262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.854274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.854437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.854448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.854560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.854572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.854729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.854740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 
00:59:10.647 [2024-06-11 03:55:51.854964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.854975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.855145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.855157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.855312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.855324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.855441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.855452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.855561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.647 [2024-06-11 03:55:51.855572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.647 qpair failed and we were unable to recover it. 00:59:10.647 [2024-06-11 03:55:51.855674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.855686] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.855791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.855803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.855929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.855941] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.856106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.856118] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.856223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.856235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 
00:59:10.648 [2024-06-11 03:55:51.856327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.856339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.856439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.856450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.856677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.856688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.856846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.856857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.856972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.856983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.857093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.857105] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.857265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.857277] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.857389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.857400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.857577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.857589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.857683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.857695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 
00:59:10.648 [2024-06-11 03:55:51.857788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.857801] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.858049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.858062] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.858161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.858172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.858268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.858279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.858376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.858387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.858490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.858502] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.858698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.858709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.858892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.858904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.859075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.859089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.859196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.859208] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 
00:59:10.648 [2024-06-11 03:55:51.859312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.859324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.859408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.859419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.859512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.859525] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.859711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.859722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.859878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.859890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.859993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.860006] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.860168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.860181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.860283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.860295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.860400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.860412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 00:59:10.648 [2024-06-11 03:55:51.860581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.648 [2024-06-11 03:55:51.860592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.648 qpair failed and we were unable to recover it. 
00:59:10.649 [2024-06-11 03:55:51.860684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.860696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.860854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.860867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.861095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.861107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.861277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.861289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.861394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.861405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.861499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.861510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.861622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.861634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.861802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.861813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.862007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.862022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.862253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.862269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 
00:59:10.649 [2024-06-11 03:55:51.862406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.862420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.862516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.862530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.862702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.862717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.862835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.862847] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.863016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.863029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.863132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.863144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.863410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.863427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.863554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.863568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.863743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.863758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.863955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.863988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 
00:59:10.649 [2024-06-11 03:55:51.864184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.864211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.864359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.864377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.864581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.864598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.864767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.864785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.864953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.864970] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.865150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.865167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.865290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.865307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.865418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.865435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.865604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.865618] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.865794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.865806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 
00:59:10.649 [2024-06-11 03:55:51.866033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.866045] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.866146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.866158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.866360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.866374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.866538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.866549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.866662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.866674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.866783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.866794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.866908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.649 [2024-06-11 03:55:51.866920] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.649 qpair failed and we were unable to recover it. 00:59:10.649 [2024-06-11 03:55:51.867023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.867034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.867218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.867229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.867348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.867359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 
00:59:10.650 [2024-06-11 03:55:51.867522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.867534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.867714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.867726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.867851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.867862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.867970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.867982] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.868144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.868155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.868291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.868303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.868407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.868419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.868597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.868610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.868735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.868748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.868935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.868947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 
00:59:10.650 [2024-06-11 03:55:51.869045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.869056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.869239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.869252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.869422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.869435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.869555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.869568] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.869728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.869740] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.869856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.869868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.869975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.869987] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.870082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.870094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.870258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.870272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.870403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.870426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 
00:59:10.650 [2024-06-11 03:55:51.870609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.870626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.870892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.870909] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.871028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.871045] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.871162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.871178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.871296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.871313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.871439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.871456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.871559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.871575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.871689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.871705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.871877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.871891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.872017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.872029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 
00:59:10.650 [2024-06-11 03:55:51.872201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.650 [2024-06-11 03:55:51.872212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.650 qpair failed and we were unable to recover it. 00:59:10.650 [2024-06-11 03:55:51.872332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.872344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.872502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.872516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.872624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.872635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.872749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.872761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.872933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.872945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.873176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.873188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.873389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.873400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.873497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.873508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.873617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.873628] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 
00:59:10.651 [2024-06-11 03:55:51.873783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.873794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.873899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.873911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.874020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.874032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.874191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.874202] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.874364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.874376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.874551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.874562] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.874725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.874737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.874835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.874846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.874958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.874969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.875194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.875207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 
00:59:10.651 [2024-06-11 03:55:51.875296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.875307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.875489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.875500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.875677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.875688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.875847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.875859] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.875972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.875983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.876140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.876152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.876258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.876270] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.876379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.876390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.876513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.876525] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.876625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.876637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 
00:59:10.651 [2024-06-11 03:55:51.876796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.876807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.876925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.876936] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.877109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.877121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.877289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.877301] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.651 qpair failed and we were unable to recover it. 00:59:10.651 [2024-06-11 03:55:51.877412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.651 [2024-06-11 03:55:51.877425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.877537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.877549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.877725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.877737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.877916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.877928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.878133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.878145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.878324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.878336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 
00:59:10.652 [2024-06-11 03:55:51.878444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.878456] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.878548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.878560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.878724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.878735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.878841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.878852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.879081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.879093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.879257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.879269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.879370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.879381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.879558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.879571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.879669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.879682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.879842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.879854] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 
00:59:10.652 [2024-06-11 03:55:51.880043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.880056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.880148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.880162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.880321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.880337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.880421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.880434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.880544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.880556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.880712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.880725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.880906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.880919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.881100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.881113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.881291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.881303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.881399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.881411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 
00:59:10.652 [2024-06-11 03:55:51.881663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.881675] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.881860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.881872] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.881964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.881975] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.882150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.882162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.882276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.882288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.652 qpair failed and we were unable to recover it. 00:59:10.652 [2024-06-11 03:55:51.882394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.652 [2024-06-11 03:55:51.882407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.882589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.882602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.882782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.882794] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.882961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.882974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.883078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.883093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 
00:59:10.653 [2024-06-11 03:55:51.883193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.883205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.883330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.883342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.883443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.883455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.883579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.883591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.883681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.883692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.883838] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:59:10.653 [2024-06-11 03:55:51.883869] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:59:10.653 [2024-06-11 03:55:51.883877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:59:10.653 [2024-06-11 03:55:51.883884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:59:10.653 [2024-06-11 03:55:51.883889] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:59:10.653 [2024-06-11 03:55:51.883919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.883932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.884157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.884167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.884336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.884348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
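The app_setup_trace NOTICE lines above give the trace-capture recipe verbatim; written out as a command sequence on the target host, it would look like the sketch below (the /tmp destination is illustrative and not taken from the log):

spdk_trace -s nvmf -i 0
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0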
00:59:10.653 [2024-06-11 03:55:51.884447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.884458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.884630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.884641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.884739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.884750] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.884874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.884885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.884993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.885004] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.885028] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:59:10.653 [2024-06-11 03:55:51.885193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.885205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.885138] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:59:10.653 [2024-06-11 03:55:51.885242] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:59:10.653 [2024-06-11 03:55:51.885243] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:59:10.653 [2024-06-11 03:55:51.885387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.885399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.885497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.885508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.885608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.653 [2024-06-11 03:55:51.885619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.653 qpair failed and we were unable to recover it.
00:59:10.653 [2024-06-11 03:55:51.885735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.885747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.885850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.885862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.885971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.885983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.886101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.886113] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.886277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.886288] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.653 [2024-06-11 03:55:51.886388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.653 [2024-06-11 03:55:51.886399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.653 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.886495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.886507] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.886659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.886671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.886844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.886856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.887090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.887101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 
00:59:10.654 [2024-06-11 03:55:51.887212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.887224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.887390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.887401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.887503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.887515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.887642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.887654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.887772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.887783] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.887939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.887951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.888110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.888122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.888303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.888314] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.888407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.888418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.888583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.888595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 
00:59:10.654 [2024-06-11 03:55:51.888768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.888779] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.888970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.888981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.889145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.889157] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.889263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.889274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.889434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.889445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.889559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.889571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.889742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.889753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.889851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.889863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.889954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.889966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 00:59:10.654 [2024-06-11 03:55:51.890199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.890211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it. 
00:59:10.654 [2024-06-11 03:55:51.890434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.654 [2024-06-11 03:55:51.890446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.654 qpair failed and we were unable to recover it.
[Log condensed: the same three-part failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 03:55:51.890 and 03:55:51.925. Only the timestamps and the tqpair pointer vary; the affected tqpairs are 0x7f01b0000b90, 0x7f01b8000b90, 0x7f01a8000b90, and 0xb62e70.]
00:59:10.660 [2024-06-11 03:55:51.924805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.924818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.924938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.924951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.925157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.925170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.925308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.925321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.925390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.925402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.925493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.925505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.925675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.925698] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.925887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.925905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.926074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.660 [2024-06-11 03:55:51.926091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.660 qpair failed and we were unable to recover it. 00:59:10.660 [2024-06-11 03:55:51.926207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.926224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 
00:59:10.661 [2024-06-11 03:55:51.926333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.926350] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.926471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.926488] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.926603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.926620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.926803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.926821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.927015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.927033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.927214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.927231] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.927431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.927448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.927555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.927572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.927746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.927762] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.927924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.927936] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 
00:59:10.661 [2024-06-11 03:55:51.928098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.928110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.928218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.928229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.928342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.928353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.928465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.928476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.928569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.928580] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.928751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.928763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.928937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.928949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.929125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.929137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.929307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.929319] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.929411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.929423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 
00:59:10.661 [2024-06-11 03:55:51.929528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.929539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.929642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.929653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.929822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.929834] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.929942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.929963] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.930077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.930094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.930212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.930229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.930335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.930353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.930457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.930474] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.930659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.930677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.930794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.930809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 
00:59:10.661 [2024-06-11 03:55:51.931020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.931033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.931130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.931143] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.931323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.931336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.931492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.931504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.931666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.931678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.931787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.931799] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.931980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.931996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.661 [2024-06-11 03:55:51.932195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.661 [2024-06-11 03:55:51.932207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.661 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.932384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.932397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.932489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.932501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 
00:59:10.662 [2024-06-11 03:55:51.932724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.932736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.932861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.932874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.933033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.933046] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.933216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.933228] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.933318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.933329] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.933397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.933409] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.933590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.933602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.933774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.933787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.933898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.933910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.934075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.934089] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 
00:59:10.662 [2024-06-11 03:55:51.934187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.934199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.934290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.934303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.934398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.934410] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.934511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.934523] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.934703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.934716] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.934874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.934886] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.935089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.935103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.935201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.935214] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.935373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.935384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.935491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.935503] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 
00:59:10.662 [2024-06-11 03:55:51.935613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.935626] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.935806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.935818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.935979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.935992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.936188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.936204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.936382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.936395] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.936519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.936531] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.936630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.662 [2024-06-11 03:55:51.936641] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.662 qpair failed and we were unable to recover it. 00:59:10.662 [2024-06-11 03:55:51.936809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.936821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.936932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.936944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.937024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.937036] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 
00:59:10.663 [2024-06-11 03:55:51.937135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.937147] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.937396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.937407] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.937632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.937644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.937808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.937821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.937997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938016] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.938109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.938281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.938449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.938573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.938741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 
00:59:10.663 [2024-06-11 03:55:51.938819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938832] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.938939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.938951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.939053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.939066] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.939223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.939235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.939334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.939346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.939450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.939462] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.939621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.939633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.939732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.939745] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.939900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.939912] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.940021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 
00:59:10.663 [2024-06-11 03:55:51.940160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.940281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.940477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940489] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.940654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.940755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940766] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.940859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.940951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.940962] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.941065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.941255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.941363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 
00:59:10.663 [2024-06-11 03:55:51.941462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.941565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.941651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.941779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.941899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.941910] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.942019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.942030] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.942205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.942217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.942379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.663 [2024-06-11 03:55:51.942391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.663 qpair failed and we were unable to recover it. 00:59:10.663 [2024-06-11 03:55:51.942619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.942632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.942810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.942822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 
00:59:10.664 [2024-06-11 03:55:51.943047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.943059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.943245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.943256] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.943344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.943356] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.943513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.943524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.943694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.943705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.943818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.943828] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.944016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.944028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.944141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.944152] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.944406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.944417] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 00:59:10.664 [2024-06-11 03:55:51.944522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.664 [2024-06-11 03:55:51.944533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.664 qpair failed and we were unable to recover it. 
00:59:10.664 [2024-06-11 03:55:51.944641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.664 [2024-06-11 03:55:51.944652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.664 qpair failed and we were unable to recover it.
... (the same three-line failure, connect() errno = 111 followed by the sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", repeats for every reconnect attempt timestamped between 03:55:51.944 and 03:55:51.979; duplicate entries collapsed) ...
00:59:10.670 [2024-06-11 03:55:51.979542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.670 [2024-06-11 03:55:51.979554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.670 qpair failed and we were unable to recover it.
00:59:10.670 [2024-06-11 03:55:51.979692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.979703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.979814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.979825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.979981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.979992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.980236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.980248] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.980362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.980373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.980469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.980480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.980616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.980627] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.980804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.980815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.981012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.981024] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.981272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.981283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 
00:59:10.670 [2024-06-11 03:55:51.981404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.981415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.981522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.981533] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.981634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.981646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.981736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.981751] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.981921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.981932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.982055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.982068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.982213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.982224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.982407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.982418] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.982553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.982565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.982682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.982693] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 
00:59:10.670 [2024-06-11 03:55:51.982798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.982809] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.983056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.983067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.983179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.983190] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.983439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.983450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.983738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.983749] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.983905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.983917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.984151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.984163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.670 [2024-06-11 03:55:51.984271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.670 [2024-06-11 03:55:51.984283] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.670 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.984438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.984449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.984516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.984527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 
00:59:10.671 [2024-06-11 03:55:51.984694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.984705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.984862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.984873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.984967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.984978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.985204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.985215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.985371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.985382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.985498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.985510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.985668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.985679] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.985850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.985862] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.986041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.986053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.986228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.986239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 
00:59:10.671 [2024-06-11 03:55:51.986361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.986372] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.986462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.986473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.986697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.986709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.986879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.986889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.986990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.987002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.987168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.987179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.987401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.987412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.987525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.987536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.987638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.987649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.987757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.987768] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 
00:59:10.671 [2024-06-11 03:55:51.987865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.987877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.988113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.988125] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.988300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.988311] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.988466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.988479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.988634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.988645] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.988795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.988806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.988893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.988904] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.989016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.989027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.989152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.989163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.989330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.989341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 
00:59:10.671 [2024-06-11 03:55:51.989498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.989509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.989667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.989678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.989788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.989800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.989969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.989981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.990141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.990153] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.990383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.671 [2024-06-11 03:55:51.990394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.671 qpair failed and we were unable to recover it. 00:59:10.671 [2024-06-11 03:55:51.990517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.990528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.990689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.990700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.990800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.990811] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.990889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.990900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 
00:59:10.672 [2024-06-11 03:55:51.991017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.991029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.991308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.991320] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.991421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.991432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.991522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.991534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.991719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.991730] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.991953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.991965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.992074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.992086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.992285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.992297] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.992403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.992415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.992641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.992652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 
00:59:10.672 [2024-06-11 03:55:51.992828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.992839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.992959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.992971] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.993129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.993141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.993374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.993385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.993621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.993632] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.993735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.993746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.993870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.993881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.994089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.994101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.994264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.994275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.672 [2024-06-11 03:55:51.994433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.994445] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 
00:59:10.672 [2024-06-11 03:55:51.994556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.672 [2024-06-11 03:55:51.994567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.672 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.994815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.994826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.995007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.995023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.995263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.995278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.995381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.995393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.995632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.995644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.995764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.995775] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.995945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.995956] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.996098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.996111] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.996357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.996369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 
00:59:10.941 [2024-06-11 03:55:51.996487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.996498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.996669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.996680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.996916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.996928] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.997186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.997198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.997354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.997366] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.997556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.997567] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.997692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.997704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.997929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.997940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.998112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.998124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.998334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.998345] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 
00:59:10.941 [2024-06-11 03:55:51.998586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.998598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.998872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.998883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.999057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.999069] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.999317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.999330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.999501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.999515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.999784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.999796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:51.999967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:51.999979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.000092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.000103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.000296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.000307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.000557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.000569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 
00:59:10.941 [2024-06-11 03:55:52.000693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.000704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.000872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.000883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.001044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.001057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.001306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.001317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.001571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.001582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.001837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.001848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.002093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.002106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.002264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.002276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.002452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.002464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 00:59:10.941 [2024-06-11 03:55:52.002717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.941 [2024-06-11 03:55:52.002729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.941 qpair failed and we were unable to recover it. 
00:59:10.941 [2024-06-11 03:55:52.002916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.941 [2024-06-11 03:55:52.002927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.941 qpair failed and we were unable to recover it.
00:59:10.942 [2024-06-11 03:55:52.016622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.942 [2024-06-11 03:55:52.016633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.942 qpair failed and we were unable to recover it.
00:59:10.942 [2024-06-11 03:55:52.016880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.942 [2024-06-11 03:55:52.016891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.942 qpair failed and we were unable to recover it.
00:59:10.942 [2024-06-11 03:55:52.017135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.942 [2024-06-11 03:55:52.017146] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.942 qpair failed and we were unable to recover it.
00:59:10.942 [2024-06-11 03:55:52.017368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.942 [2024-06-11 03:55:52.017379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.942 qpair failed and we were unable to recover it.
00:59:10.942 [2024-06-11 03:55:52.017546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.942 [2024-06-11 03:55:52.017558] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.942 qpair failed and we were unable to recover it.
00:59:10.942 [2024-06-11 03:55:52.017716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.942 [2024-06-11 03:55:52.017729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.942 qpair failed and we were unable to recover it.
00:59:10.943 [2024-06-11 03:55:52.017981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.943 [2024-06-11 03:55:52.017997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.943 qpair failed and we were unable to recover it.
00:59:10.943 [2024-06-11 03:55:52.018154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.943 [2024-06-11 03:55:52.018186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420
00:59:10.943 qpair failed and we were unable to recover it.
00:59:10.943 [2024-06-11 03:55:52.018449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.943 [2024-06-11 03:55:52.018473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420
00:59:10.943 qpair failed and we were unable to recover it.
00:59:10.943 [2024-06-11 03:55:52.018747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.943 [2024-06-11 03:55:52.018764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420
00:59:10.943 qpair failed and we were unable to recover it.
00:59:10.943 [2024-06-11 03:55:52.019032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.019047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.019213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.019225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.019470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.019481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.019705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.019717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.019897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.019908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.020123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.020135] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.020304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.020315] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.020552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.020564] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.020786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.020798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.021043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.021056] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 
00:59:10.943 [2024-06-11 03:55:52.021321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.021332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.021441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.021452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.021703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.021714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.021964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.021976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.022147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.022158] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.022329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.022340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.022532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.022544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.022660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.022672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.022844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.022855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.022947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.022958] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 
00:59:10.943 [2024-06-11 03:55:52.023137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.023149] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.023373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.023384] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.023571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.023582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.023828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.023839] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.024082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.024094] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.024316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.024328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.024574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.024586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.024684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.024696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.024889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.024901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.025072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.025085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 
00:59:10.943 [2024-06-11 03:55:52.025183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.025195] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.025358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.025369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.025542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.025554] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.025785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.025796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.025962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.025974] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.026205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.026217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.026485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.026498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.026606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.026617] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.026864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.026876] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.027127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.027141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 
00:59:10.943 [2024-06-11 03:55:52.027380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.027394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.027621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.027633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.027847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.027858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.028104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.028115] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.028204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.028215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.028487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.028498] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.028614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.028625] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.028853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.028865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.029037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.029048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.029257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.029268] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 
00:59:10.943 [2024-06-11 03:55:52.029509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.029521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.029697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.029708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.029937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.029948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.030134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.030145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.030375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.030386] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.030647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.030659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.030772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.030784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.031042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.031054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.031304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.031316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 00:59:10.943 [2024-06-11 03:55:52.031482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.943 [2024-06-11 03:55:52.031493] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.943 qpair failed and we were unable to recover it. 
00:59:10.944 [2024-06-11 03:55:52.031689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.031701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.031926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.031938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.032161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.032172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.032266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.032278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.032545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.032556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.032784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.032796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.032965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.032976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.033154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.033166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.033409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.033420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.033589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.033601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 
00:59:10.944 [2024-06-11 03:55:52.033762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.033773] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.034026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.034038] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.034292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.034304] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.034467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.034478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.034643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.034654] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.034834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.034846] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.035064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.035076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.035241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.035253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.035432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.035444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.035620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.035631] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 
00:59:10.944 [2024-06-11 03:55:52.035855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.035868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.036108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.036120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.036394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.036405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.036672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.036683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.036953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.036964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.037123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.037134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.037358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.037368] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.037610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.037620] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.037888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.037900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.038122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.038134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 
00:59:10.944 [2024-06-11 03:55:52.038302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.038314] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.038471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.038483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.038669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.038681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.038840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.038852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.039079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.039091] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.039200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.039212] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.039458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.039469] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.039700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.039711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.039880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.039891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.040058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.040070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 
00:59:10.944 [2024-06-11 03:55:52.040238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.040250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.040498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.040510] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.040762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.040774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.040944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.040955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.041132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.041144] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.041332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.041344] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.041515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.041527] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.041722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.041733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.041955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.041966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 00:59:10.944 [2024-06-11 03:55:52.042143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.944 [2024-06-11 03:55:52.042154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.944 qpair failed and we were unable to recover it. 
00:59:10.948 [2024-06-11 03:55:52.082589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.082601] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.082859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.082871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.083094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.083106] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.083204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.083215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.083459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.083471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.083748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.083759] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.083955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.083967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.084154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.084180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.084365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.084382] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.084608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.084624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 
00:59:10.948 [2024-06-11 03:55:52.084745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.084758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.084893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.084905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.085082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.085093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.085341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.085353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.085599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.085610] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.085806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.085818] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.085911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.085923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.086118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.086129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.086385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.086396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.086663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.086674] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 
00:59:10.948 [2024-06-11 03:55:52.086951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.086964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.087240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.087252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.087496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.087507] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.087731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.087742] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.087919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.087931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.088154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.088166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.088435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.088447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.088724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.088735] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.089017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.089029] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.089272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.089284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 
00:59:10.948 [2024-06-11 03:55:52.089508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.089520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.089701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.089713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.089889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.089900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.090091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.090103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.090283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.090295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.090549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.090560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.090810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.090821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.090982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.090994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.091118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.091131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.091242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.091253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 
00:59:10.948 [2024-06-11 03:55:52.091375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.091387] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.091632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.091643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.091760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.091771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.092029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.092041] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.092287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.092299] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.092574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.092585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.092791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.092802] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.093057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.093070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.093313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.093325] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.093503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.093514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 
00:59:10.948 [2024-06-11 03:55:52.093684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.093696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.093882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.093893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.094143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.094155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.094327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.094338] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.094587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.094598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.094755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.094767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.094952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.094964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.095151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.095163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.095329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.095341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.095508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.095519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 
00:59:10.948 [2024-06-11 03:55:52.095793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.095807] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.096107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.096119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.096368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.096379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.096493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.096505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.096676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.948 [2024-06-11 03:55:52.096688] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.948 qpair failed and we were unable to recover it. 00:59:10.948 [2024-06-11 03:55:52.096938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.096949] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.097201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.097213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.097390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.097401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.097648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.097659] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.097837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.097848] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 
00:59:10.949 [2024-06-11 03:55:52.098096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.098108] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.098229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.098241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.098442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.098453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.098657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.098668] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.098862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.098874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.099089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.099120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.099376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.099388] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.099609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.099621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.099816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.099827] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.099987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.099999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 
00:59:10.949 [2024-06-11 03:55:52.100176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.100188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.100454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.100465] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.100637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.100649] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.100877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.100888] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.101007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.101022] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.101187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.101199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.101395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.101406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.101505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.101516] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.101678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.101690] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.101802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.101813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 
00:59:10.949 [2024-06-11 03:55:52.102006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.102027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.102301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.102312] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.102524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.102535] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.102725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.102736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.102982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.102993] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.103241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.103253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.103499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.103511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.103704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.103715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.103908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.103919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.104032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.104043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 
00:59:10.949 [2024-06-11 03:55:52.104203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.104217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.104329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.104341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.104501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.104512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.104736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.104747] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.104855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.104867] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.105109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.105120] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.105292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.105303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.105554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.105565] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.105741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.105752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.105924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.105935] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 
00:59:10.949 [2024-06-11 03:55:52.106051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.106063] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.106296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.106307] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.106402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.106414] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.106634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.106646] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.949 [2024-06-11 03:55:52.106809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.949 [2024-06-11 03:55:52.106820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.949 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.107066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.107078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.107246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.107258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.107348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.107358] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.107525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.107536] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.107692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.107703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 
00:59:10.950 [2024-06-11 03:55:52.107901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.107913] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.108073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.108085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.108241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.108253] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.108436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.108447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.108714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.108726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.108907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.108919] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.109189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.109201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.109341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.109364] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.109626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.109643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.109926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.109942] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 
00:59:10.950 [2024-06-11 03:55:52.110152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.110170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.110361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.110378] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.110566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.110583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.110766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.110782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.110899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.110916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.111154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.111171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.111373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.111390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.111573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.111589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.111779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.111795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb62e70 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 00:59:10.950 [2024-06-11 03:55:52.112098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.112110] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it. 
00:59:10.950 [2024-06-11 03:55:52.112303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.950 [2024-06-11 03:55:52.112315] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.950 qpair failed and we were unable to recover it.
[the same three-message failure sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 03:55:52.112477 and 03:55:52.156680, with only the microsecond timestamps changing; duplicate entries elided]
00:59:10.953 [2024-06-11 03:55:52.156904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.953 [2024-06-11 03:55:52.156915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.953 qpair failed and we were unable to recover it. 00:59:10.953 [2024-06-11 03:55:52.157189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.953 [2024-06-11 03:55:52.157201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.953 qpair failed and we were unable to recover it. 00:59:10.953 [2024-06-11 03:55:52.157445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.953 [2024-06-11 03:55:52.157457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.953 qpair failed and we were unable to recover it. 00:59:10.953 [2024-06-11 03:55:52.157625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.953 [2024-06-11 03:55:52.157636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.953 qpair failed and we were unable to recover it. 00:59:10.953 [2024-06-11 03:55:52.157844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.953 [2024-06-11 03:55:52.157857] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.953 qpair failed and we were unable to recover it. 00:59:10.953 [2024-06-11 03:55:52.158032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.953 [2024-06-11 03:55:52.158044] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.953 qpair failed and we were unable to recover it. 00:59:10.953 [2024-06-11 03:55:52.158221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.953 [2024-06-11 03:55:52.158233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.953 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.158496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.158508] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.158684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.158696] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.158946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.158957] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 
00:59:10.954 [2024-06-11 03:55:52.159113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.159124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.159287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.159298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.159564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.159575] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.159743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.159754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.159943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.159955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.160166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.160178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.160384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.160396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.160505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.160517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.160705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.160717] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.160939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.160950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 
00:59:10.954 [2024-06-11 03:55:52.161199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.161211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.161306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.161317] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.161506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.161518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.161680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.161691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.161873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.161884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.162047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.162059] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.162283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.162295] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.162461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.162473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.162699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.162710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.162866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.162877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 
00:59:10.954 [2024-06-11 03:55:52.162986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.162997] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.163218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.163239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.163441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.163458] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.163716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.163733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.163944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.163960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.164172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.164189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.164384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.164400] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.164533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.164546] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.164707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.164719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.164833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.164844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 
00:59:10.954 [2024-06-11 03:55:52.165088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.165101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.165272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.165284] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.165511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.165522] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.165699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.165710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.165953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.165967] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.166160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.166171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.166422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.166434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.166611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.166623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.166742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.166753] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.167021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.167033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 
00:59:10.954 [2024-06-11 03:55:52.167150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.167162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.167395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.167406] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.167523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.167534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.167707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.167719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.167986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.167998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.168175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.168187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.168297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.168309] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.168505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.168517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.168696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.168707] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.168809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.168820] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 
00:59:10.954 [2024-06-11 03:55:52.168922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.168933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.169159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.169170] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.169334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.169346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.169623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.169635] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.169808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.169819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.169977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.169989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.170221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.170233] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.954 [2024-06-11 03:55:52.170354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.954 [2024-06-11 03:55:52.170365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.954 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.170629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.170640] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.170879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.170890] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 
00:59:10.955 [2024-06-11 03:55:52.171065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.171077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.171303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.171314] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.171489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.171501] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.171724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.171736] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.171905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.171917] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.172191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.172203] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.172450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.172461] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.172659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.172670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.172845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.172856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.172959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.172972] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 
00:59:10.955 [2024-06-11 03:55:52.173128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.173140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.173386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.173398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.173504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.173515] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.173769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.173780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.173902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.173915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.174073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.174085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.174301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.174313] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.174532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.174544] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.174805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.174816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.175067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.175079] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 
00:59:10.955 [2024-06-11 03:55:52.175239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.175250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.175443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.175455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.175612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.175623] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.175800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.175812] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.175935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.175947] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.176056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.176068] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.176168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.176180] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.176401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.176413] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.176639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.176650] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.176809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.176821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 
00:59:10.955 [2024-06-11 03:55:52.176983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.176994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.177198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.177210] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.177413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.177425] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.177519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.177530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.177690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.177701] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.177826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.177837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.178023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.178035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.178206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.178217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.178414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.178426] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.178691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.178703] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 
00:59:10.955 [2024-06-11 03:55:52.178938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.178950] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.179204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.179216] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.179392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.179404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.179627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.179639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.179885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.179897] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.180142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.180154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.180313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.180326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.180494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.180505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.180680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.180692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.180940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.180951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 
00:59:10.955 [2024-06-11 03:55:52.181119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.181131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.181310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.181321] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.181545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.181556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.181806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.181816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.181984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.955 [2024-06-11 03:55:52.181998] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.955 qpair failed and we were unable to recover it. 00:59:10.955 [2024-06-11 03:55:52.182160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.956 [2024-06-11 03:55:52.182172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.956 qpair failed and we were unable to recover it. 00:59:10.956 [2024-06-11 03:55:52.182424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.956 [2024-06-11 03:55:52.182435] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.956 qpair failed and we were unable to recover it. 00:59:10.956 [2024-06-11 03:55:52.182694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.956 [2024-06-11 03:55:52.182705] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.956 qpair failed and we were unable to recover it. 00:59:10.956 [2024-06-11 03:55:52.182900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.956 [2024-06-11 03:55:52.182911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.956 qpair failed and we were unable to recover it. 00:59:10.956 [2024-06-11 03:55:52.183111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.956 [2024-06-11 03:55:52.183123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.956 qpair failed and we were unable to recover it. 
00:59:10.956 [2024-06-11 03:55:52.183399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.956 [2024-06-11 03:55:52.183411] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.956 qpair failed and we were unable to recover it.
00:59:10.957 [2024-06-11 03:55:52.197118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.957 [2024-06-11 03:55:52.197145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420
00:59:10.957 qpair failed and we were unable to recover it.
00:59:10.959 [identical entries repeated: the same connect() failure (errno = 111) and unrecoverable qpair error recur continuously from 2024-06-11 03:55:52.183399 through 03:55:52.227860 for tqpair=0x7f01b0000b90 and tqpair=0x7f01b8000b90, all targeting addr=10.0.0.2, port=4420]
00:59:10.959 [2024-06-11 03:55:52.227983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.227995] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.228229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.228241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.228404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.228415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.228575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.228586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.228833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.228844] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.229030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.229043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.229201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.229213] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.229436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.229447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.229735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.229746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.229983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.229994] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 
00:59:10.959 [2024-06-11 03:55:52.230268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.230280] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.230527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.230539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.230697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.230709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.230957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.230969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.231142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.231154] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.231336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.231348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.231580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.231591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.231858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.231870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.232115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.232126] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.232296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.232308] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 
00:59:10.959 [2024-06-11 03:55:52.232461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.232473] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.232660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.232672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.232897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.232908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.233127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.233139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.233241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.233254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.233501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.233513] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.233700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.233711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.959 qpair failed and we were unable to recover it. 00:59:10.959 [2024-06-11 03:55:52.233803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.959 [2024-06-11 03:55:52.233815] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.233992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.234003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.234242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.234254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 
00:59:10.960 [2024-06-11 03:55:52.234438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.234450] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.234626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.234637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.234829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.234840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.235035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.235047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.235226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.235237] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.235428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.235440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.235603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.235614] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.235880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.235893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.236074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.236085] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.236311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.236322] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 
00:59:10.960 [2024-06-11 03:55:52.236487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.236500] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.236669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.236681] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.236905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.236916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.237081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.237092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.237325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.237336] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.237456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.237467] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.237714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.237726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.237948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.237960] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.238135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.238147] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.238390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.238401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 
00:59:10.960 [2024-06-11 03:55:52.238650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.238662] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.238921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.238940] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.239125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.239142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.239354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.239370] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.239627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.239644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.239906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.239922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.240187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.240204] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01a8000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.240332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.240346] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.240471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.240482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.240598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.240609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 
00:59:10.960 [2024-06-11 03:55:52.240714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.240727] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.240953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.240964] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.241150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.241162] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.241386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.241397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.241638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.241652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.241865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.241877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.242170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.242181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.242381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.242393] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.242500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.242512] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.242707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.242719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 
00:59:10.960 [2024-06-11 03:55:52.242895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.242906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.243066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.243078] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.243243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.243254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.243429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.243440] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.243594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.243605] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.243877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.243889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.244006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.244020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.244130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.244141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.244316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.244327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.244550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.244561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 
00:59:10.960 [2024-06-11 03:55:52.244794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.244806] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.244973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.244984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.245208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.960 [2024-06-11 03:55:52.245220] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.960 qpair failed and we were unable to recover it. 00:59:10.960 [2024-06-11 03:55:52.245396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.245408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.245586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.245597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.245827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.245840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.246075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.246087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.246195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.246207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.246471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.246483] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.246674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.246685] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 
00:59:10.961 [2024-06-11 03:55:52.246859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.246871] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.247127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.247139] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.247330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.247341] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.247592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.247604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.247857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.247868] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.248109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.248121] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.248306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.248318] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.248416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.248427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.248623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.248634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.248876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.248887] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 
00:59:10.961 [2024-06-11 03:55:52.249065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.249076] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.249251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.249263] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.249365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.249377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.249536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.249547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.249770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.249781] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.250021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.250033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.250236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.250247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.250492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.250504] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.250678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.250689] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.250855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.250866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 
00:59:10.961 [2024-06-11 03:55:52.250978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.250989] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.251256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.251267] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.251510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.251521] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.251650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.251661] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.251889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.251901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.252126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.252138] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.252313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.252324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.252574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.252585] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.252781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.252792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.253038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.253050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 
00:59:10.961 [2024-06-11 03:55:52.253230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.253241] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.253466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.253477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.253645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.253656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.253837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.253849] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.254119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.254130] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.254299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.254310] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.254467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.254479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.254640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.254652] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.254894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.254905] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 00:59:10.961 [2024-06-11 03:55:52.255154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.961 [2024-06-11 03:55:52.255166] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.961 qpair failed and we were unable to recover it. 
00:59:10.961 [2024-06-11 03:55:52.255338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.961 [2024-06-11 03:55:52.255349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.961 qpair failed and we were unable to recover it.
00:59:10.961 [... the same three-record failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f01b0000b90 at 10.0.0.2:4420, qpair unrecoverable) repeats continuously, with only the microsecond timestamps advancing, from 2024-06-11 03:55:52.255544 through 03:55:52.299540 ...]
00:59:10.965 [2024-06-11 03:55:52.299765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.965 [2024-06-11 03:55:52.299777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.965 qpair failed and we were unable to recover it.
00:59:10.965 [2024-06-11 03:55:52.299941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.299952] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.300186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.300198] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.300367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.300379] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.300626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.300637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.300890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.300902] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.301147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.301159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.301336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.301347] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.301592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.301604] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.301714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.301725] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.301921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.301932] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 
00:59:10.965 [2024-06-11 03:55:52.302096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.302107] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.302221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.302232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.302388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.302399] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.302565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.302576] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.302825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.302836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.303086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.303097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.303342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.303353] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.303525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.303537] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.303707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.303719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.303891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.303903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 
00:59:10.965 [2024-06-11 03:55:52.304059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.304071] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.304318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.304330] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.304507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.304519] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.304759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.304771] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.304969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.304981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.305204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.305215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.305436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.305447] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.305644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.305656] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.305904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.305915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.306020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.306032] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 
00:59:10.965 [2024-06-11 03:55:52.306241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.306252] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.306376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.306390] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.306613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.306624] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.306846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.306858] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.307082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.307093] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.307278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.307290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.307469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.307481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.307735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.307746] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.307903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.307914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.308179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.308191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 
00:59:10.965 [2024-06-11 03:55:52.308366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.308377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.308597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.308609] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.308778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.308789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.309061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.309072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.309323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.309334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.309442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.309453] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.309707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.309719] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.309904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.309916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.310076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.310088] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.310243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.310255] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 
00:59:10.965 [2024-06-11 03:55:52.310503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.310514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.310758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.310770] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.311046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.311057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.311262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.311273] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.311500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.311511] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.311698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.311710] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.311955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.965 [2024-06-11 03:55:52.311966] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.965 qpair failed and we were unable to recover it. 00:59:10.965 [2024-06-11 03:55:52.312128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.312140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.312387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.312398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.312640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.312651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 
00:59:10.966 [2024-06-11 03:55:52.312842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.312853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.313025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.313037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.313140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.313151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.313377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.313389] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.313562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.313574] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.313772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.313784] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.314036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.314047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.314286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.314298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.314459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.314471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.314692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.314704] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 
00:59:10.966 [2024-06-11 03:55:52.314944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.314955] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.315208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.315222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.315468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.315479] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.315727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.315738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.315912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.315924] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.316151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.316163] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.316321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.316332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.316560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.316572] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.316740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.316752] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.316995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.317007] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 
00:59:10.966 [2024-06-11 03:55:52.317260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.317272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.317535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.317547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.317659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.317670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.317863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.317874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.318046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.318057] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.318234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.318246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.318447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.318459] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.318572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.318583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.318753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.318765] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.318872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.318884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 
00:59:10.966 [2024-06-11 03:55:52.319001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.319015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.319238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.319250] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.319517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.319528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.319778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.319790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.320036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.320047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.320237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.320249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.320434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.320446] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.320687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.320699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.320899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.320911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.321025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.321037] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 
00:59:10.966 [2024-06-11 03:55:52.321261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.321272] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.321497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.321509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.321666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.321678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.321914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.321925] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.322174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.322186] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.322412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.322423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.322599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.322611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.322859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.322870] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.322990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.323001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 00:59:10.966 [2024-06-11 03:55:52.323166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.966 [2024-06-11 03:55:52.323179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.966 qpair failed and we were unable to recover it. 
00:59:10.966 [2024-06-11 03:55:52.323347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.323359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.323600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.323615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.323776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.323787] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.323889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.323900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.324143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.324155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.324316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.324328] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.324494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.324506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.324743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.324754] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.325006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.325020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.325207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.325219] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 
00:59:10.967 [2024-06-11 03:55:52.325473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.325484] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.325643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.325655] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.325900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.325911] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.326079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.326090] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.326210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.326222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.326343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.326354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.326531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.326541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.326726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.326737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.326927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.326938] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 00:59:10.967 [2024-06-11 03:55:52.327187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:10.967 [2024-06-11 03:55:52.327199] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:10.967 qpair failed and we were unable to recover it. 
00:59:10.967 [2024-06-11 03:55:52.327382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:10.967 [2024-06-11 03:55:52.327394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:10.967 qpair failed and we were unable to recover it.
[... the same three-message group repeats roughly 200 more times between 03:55:52.327601 and 03:55:52.371878, always with errno = 111 for tqpair=0x7f01b0000b90, addr=10.0.0.2, port=4420 ...]
00:59:11.251 [2024-06-11 03:55:52.371878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.251 [2024-06-11 03:55:52.371891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.251 qpair failed and we were unable to recover it.
00:59:11.251 [2024-06-11 03:55:52.372133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.372145] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.372316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.372327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.372573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.372584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.372756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.372767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.372992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.373003] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.373234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.373245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.373423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.373434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.373683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.373694] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.373862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.373874] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.374122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.374134] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 
00:59:11.251 [2024-06-11 03:55:52.374386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.374397] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.374660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.374671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.374922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.374933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.375160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.375172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.375341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.375352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.375578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.375590] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.375845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.375856] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.376047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.376058] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.251 [2024-06-11 03:55:52.376221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.251 [2024-06-11 03:55:52.376232] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.251 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.376349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.376360] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 
00:59:11.252 [2024-06-11 03:55:52.376595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.376607] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.376703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.376714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.376984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.376996] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.377246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.377258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.377427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.377438] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.377641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.377653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.377873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.377885] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.378136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.378148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.378412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.378423] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.378642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.378653] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 
00:59:11.252 [2024-06-11 03:55:52.378771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.378782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.379005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.379020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.379247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.379258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.379549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.379561] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.379666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.379677] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.379903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.379915] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.380093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.380104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.380263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.380274] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.380547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.380560] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.380726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.380737] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 
00:59:11.252 [2024-06-11 03:55:52.380967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.380978] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.381136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.381148] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.381320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.381332] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.381602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.381613] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.381737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.381748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.382003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.382017] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.382255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.382266] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.382379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.382391] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.382640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.382651] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.382879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.382891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 
00:59:11.252 [2024-06-11 03:55:52.383004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.383018] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.383185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.383196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.383294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.383305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.383536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.383547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.383771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.383782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.383896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.383907] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.384158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.384171] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.384425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.384437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.384558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.384569] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.384814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.384825] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 
00:59:11.252 [2024-06-11 03:55:52.385022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.385034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.385224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.385235] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.385471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.385482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.385603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.385615] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.385728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.385739] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.385990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.386002] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.252 qpair failed and we were unable to recover it. 00:59:11.252 [2024-06-11 03:55:52.386178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.252 [2024-06-11 03:55:52.386189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.386383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.386394] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.386578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.386589] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.386815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.386826] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 
00:59:11.253 [2024-06-11 03:55:52.387083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.387095] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.387198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.387209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.387465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.387476] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.387698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.387709] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.387958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.387969] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.388167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.388178] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.388350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.388362] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.388609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.388621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.388781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.388795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.388965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.388976] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 
00:59:11.253 [2024-06-11 03:55:52.389198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.389209] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.389441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.389452] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.389680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.389692] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.389932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.389944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.390129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.390141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.390363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.390374] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.390596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.390608] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.390779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.390791] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.390947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.390959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.391236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.391247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 
00:59:11.253 [2024-06-11 03:55:52.391415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.391427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.391679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.391691] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.391888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.391899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.392088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.392099] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.392343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.392354] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.392623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.392634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.392872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.392884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.393061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.393072] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.393264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.393275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.393518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.393530] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 
00:59:11.253 [2024-06-11 03:55:52.393774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.393785] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.393988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.393999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.394227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.394239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.394408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.394420] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.394591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.394602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.394785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.394796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.394973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.394984] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.395233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.395245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.395440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.395451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.395627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.395638] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 
00:59:11.253 [2024-06-11 03:55:52.395889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.395900] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.396117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.396129] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.396390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.396401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.396594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.396606] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.253 qpair failed and we were unable to recover it. 00:59:11.253 [2024-06-11 03:55:52.396699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.253 [2024-06-11 03:55:52.396711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.396871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.396883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.397074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.397086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.397285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.397296] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.397526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.397539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.397698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.397711] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 
00:59:11.254 [2024-06-11 03:55:52.397937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.397948] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.398178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.398190] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.398386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.398398] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.398551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.398563] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.398786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.398798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.398974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.398986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.399213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.399225] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.399443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.399454] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.399703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.399715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 00:59:11.254 [2024-06-11 03:55:52.399902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.254 [2024-06-11 03:55:52.399914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.254 qpair failed and we were unable to recover it. 
00:59:11.254 [2024-06-11 03:55:52.400080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.254 [2024-06-11 03:55:52.400092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.254 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats approximately 210 times between 03:55:52.400080 and 03:55:52.438466, always for tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420, errno = 111 ...]
00:59:11.258 [2024-06-11 03:55:52.438454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.258 [2024-06-11 03:55:52.438466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.258 qpair failed and we were unable to recover it.
00:59:11.258 [2024-06-11 03:55:52.438636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.258 [2024-06-11 03:55:52.438647] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.258 qpair failed and we were unable to recover it. 00:59:11.258 [2024-06-11 03:55:52.438761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.438772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.438994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.439005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.439238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.439249] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.439353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.439365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.439474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.439486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.439591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.439602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.439770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.439782] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.440031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.440043] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.440161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.440172] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 
00:59:11.259 [2024-06-11 03:55:52.440348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.440359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.440471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.440482] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.440586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.440597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.440791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.440803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.440989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.441001] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.441168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.441179] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.441341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.441352] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.441528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.441539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.441709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.441722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.441947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.441959] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 
00:59:11.259 [2024-06-11 03:55:52.442184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.442196] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.442352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.442363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.442523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.442534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.442633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.442644] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.442826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.442837] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.442943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.442954] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.443176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.443187] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.443278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.443289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.443485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.443497] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.443656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.443667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 
00:59:11.259 [2024-06-11 03:55:52.443792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.443803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.443891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.443903] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.444074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.444087] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.444249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.444260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.444352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.444363] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.444538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.444549] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.444778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.444789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.444883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.444895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.445055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.445067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.445246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.445257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 
00:59:11.259 [2024-06-11 03:55:52.445485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.445496] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.445586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.445597] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.445755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.445767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.445870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.445881] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.446050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.446061] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.446225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.446236] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.446401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.446412] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.446584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.446595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.446689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.259 [2024-06-11 03:55:52.446700] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.259 qpair failed and we were unable to recover it. 00:59:11.259 [2024-06-11 03:55:52.446781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.446793] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 
00:59:11.260 [2024-06-11 03:55:52.446910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.446923] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.447091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.447103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.447270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.447282] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.447403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.447415] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.447523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.447534] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.447801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.447813] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.447915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.447927] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.448036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.448048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.448202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.448215] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.448385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.448396] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 
00:59:11.260 [2024-06-11 03:55:52.448559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.448570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.448688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.448699] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.448878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.448889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.449004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.449020] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.449180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.449191] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.449348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.449359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.449543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.449555] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.449779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.449790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.449956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.449968] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.450128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.450140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 
00:59:11.260 [2024-06-11 03:55:52.450331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.450342] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.450454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.450466] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.450623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.450634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.450749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.450761] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.450933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.450944] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.451066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.451077] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.451205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.451217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.451328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.451339] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.451439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.451451] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.451625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.451637] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 
00:59:11.260 [2024-06-11 03:55:52.451809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.451821] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.452014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.452026] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.452200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.452211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.452336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.452348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.452572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.452583] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.452812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.452824] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.452996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.453008] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.453253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.453265] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.453362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.453373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.453613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.453630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 
00:59:11.260 [2024-06-11 03:55:52.453854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.453865] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.454040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.454051] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.454218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.454229] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.454453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.454464] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.454571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.454582] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.454755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.454767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.260 [2024-06-11 03:55:52.454934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.260 [2024-06-11 03:55:52.454945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.260 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.455170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.455181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.455337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.455351] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.455527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.455538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 
00:59:11.261 [2024-06-11 03:55:52.455654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.455665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.455778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.455789] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.455950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.455961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.456059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.456070] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.456186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.456197] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.456366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.456377] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.456536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.456547] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.456661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.456672] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.456920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.456931] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.457091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.457103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 
00:59:11.261 [2024-06-11 03:55:52.457260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.457271] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.457393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.457405] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.457509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.457520] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.457702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.457713] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.457887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.457899] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.457994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.458005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.458112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.458123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.458282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.458293] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.458537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.458548] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.458752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.458763] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 
00:59:11.261 [2024-06-11 03:55:52.458968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.458980] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.459084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.459096] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.459397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.459408] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.459513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.459524] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.459684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.459695] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.459867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.459880] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.460038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.460050] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.460305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.460316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.460444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.460455] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 00:59:11.261 [2024-06-11 03:55:52.460703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.460714] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it. 
00:59:11.261 [2024-06-11 03:55:52.460905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.261 [2024-06-11 03:55:52.460916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.261 qpair failed and we were unable to recover it.
[log condensed: the three-line error sequence above repeats for every reconnect attempt from 2024-06-11 03:55:52.460905 through 03:55:52.494257 (log timestamps 00:59:11.261-00:59:11.266), roughly 200 attempts in total, all failing with connect() errno = 111 against addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it."; tqpair is 0x7f01b0000b90 throughout except for a short run of attempts on tqpair=0x7f01b8000b90 between 03:55:52.479125 and 03:55:52.480088.]
00:59:11.266 [2024-06-11 03:55:52.494416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.494427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.494584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.494596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.494710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.494722] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.494954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.494965] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.495075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.495086] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.495195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.495207] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.495312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.495323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.495418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.495429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.495591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.495602] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.495717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.495728] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 
00:59:11.266 [2024-06-11 03:55:52.495903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.495914] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.496023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.496035] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.496171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.496183] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.496267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.496279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.496452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.266 [2024-06-11 03:55:52.496463] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.266 qpair failed and we were unable to recover it. 00:59:11.266 [2024-06-11 03:55:52.496559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.496571] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.496668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.496680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.496852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.496863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.497037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.497048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.497233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.497244] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 
00:59:11.267 [2024-06-11 03:55:52.497370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.497381] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.497498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.497509] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.497676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.497687] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.497858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.497869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.498055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.498067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.498232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.498247] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.498352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.498364] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.498460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.498471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.498583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.498596] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.498786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.498798] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 
00:59:11.267 [2024-06-11 03:55:52.498975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.498986] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.499147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.499159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.499332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.499343] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.499465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.499477] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.499727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.499738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.499805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.499816] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.499922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.499933] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.500086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.500097] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.500213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.500224] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.500329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.500340] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 
00:59:11.267 [2024-06-11 03:55:52.500503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.500514] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.500671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.500682] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.500861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.500873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.500969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.500981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.501089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.501101] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.501215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.501226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.501336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.501348] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.501445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.501457] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.501619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.501630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.501785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.501796] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 
00:59:11.267 [2024-06-11 03:55:52.501972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.501983] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.502147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.502159] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.502315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.502327] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.502482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.502494] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.502694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.502706] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.502825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.502836] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.503061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.503073] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.503257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.503269] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.503437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.267 [2024-06-11 03:55:52.503449] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.267 qpair failed and we were unable to recover it. 00:59:11.267 [2024-06-11 03:55:52.503610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.503622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 
00:59:11.268 [2024-06-11 03:55:52.503746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.503758] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.503881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.503893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.503976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.503988] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.504108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.504119] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.504214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.504226] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.504322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.504334] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.504495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.504506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.504664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.504676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.504904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.504916] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.505037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.505049] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 
00:59:11.268 [2024-06-11 03:55:52.505273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.505285] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.505390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.505402] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.505574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.505586] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.505840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.505852] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.506014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.506025] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.506128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.506140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.506361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.506373] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.506476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.506487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.506672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.506683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.506862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.506873] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 
00:59:11.268 [2024-06-11 03:55:52.507039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.507052] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.507129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.507141] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.507294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.507305] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.507476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.507487] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.507599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.507611] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.507774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.507786] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.507880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.507893] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.508004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.508023] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.508248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.508260] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.508423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.508434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 
00:59:11.268 [2024-06-11 03:55:52.508610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.508622] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.508787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.508800] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.508932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.508951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.509223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.509240] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.509375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.509392] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.509650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.509667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.509883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.509898] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.510138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.510155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.510286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.510303] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.510501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.510517] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b8000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 
00:59:11.268 [2024-06-11 03:55:52.510626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.510639] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.510866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.510877] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.511125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.511137] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.511245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.511257] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.511364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.511376] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.511579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.511594] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.511779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.511792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.268 qpair failed and we were unable to recover it. 00:59:11.268 [2024-06-11 03:55:52.511883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.268 [2024-06-11 03:55:52.511895] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.512003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.512019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.512263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.512275] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 
00:59:11.269 [2024-06-11 03:55:52.512431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.512444] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.512631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.512643] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.512745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.512764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.512998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.513013] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.513193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.513205] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.513312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.513323] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.513494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.513506] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.513668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.513680] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.513784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.513795] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.513967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.513979] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 
00:59:11.269 [2024-06-11 03:55:52.514169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.514181] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.514424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.514436] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.514618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.514630] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.514715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.514726] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.514837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.514850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.515021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.515033] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.515127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.515140] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.515233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.515245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.515418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.515429] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 00:59:11.269 [2024-06-11 03:55:52.515544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.269 [2024-06-11 03:55:52.515556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.269 qpair failed and we were unable to recover it. 
00:59:11.269 [2024-06-11 03:55:52.515780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.269 [2024-06-11 03:55:52.515792] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.269 qpair failed and we were unable to recover it.
00:59:11.269 [... the same three-line sequence — connect() failed with errno = 111, the nvme_tcp_qpair_connect_sock error for addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 03:55:52.515976 and 03:55:52.550705, alternating between tqpair=0x7f01b0000b90 and tqpair=0x7f01b8000b90 ...]
00:59:11.274 [2024-06-11 03:55:52.550811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.274 [2024-06-11 03:55:52.550823] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.274 qpair failed and we were unable to recover it.
00:59:11.274 [2024-06-11 03:55:52.550993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.551005] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.551110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.551122] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.551280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.551292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.551538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.551550] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.551631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.551642] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.551880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.551892] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.552146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.552167] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.552347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.552359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.552467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.552480] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.552704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.552715] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 
00:59:11.274 [2024-06-11 03:55:52.552819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.552830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.552999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553015] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.553111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553123] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.553211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553222] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.553414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553427] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.553520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553532] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.553731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553743] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.553871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553884] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.553988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.553999] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.554189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.554201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 
00:59:11.274 [2024-06-11 03:55:52.554448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.554460] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.554565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.554577] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.554656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.554667] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.554843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.554855] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.555042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.555053] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.555261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.555276] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.555436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.555448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.555621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.555633] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.555736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.555748] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.555871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.555883] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 
00:59:11.274 [2024-06-11 03:55:52.555980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.555992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.556250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.556262] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.556434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.556448] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.556607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.556619] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.556792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.556803] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.556910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.556922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.557142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.557155] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.557235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.557246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.557348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.557359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.557527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.557539] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 
00:59:11.274 [2024-06-11 03:55:52.557764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.557777] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.558002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.274 [2024-06-11 03:55:52.558019] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.274 qpair failed and we were unable to recover it. 00:59:11.274 [2024-06-11 03:55:52.558266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.558278] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.558392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.558404] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.558559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.558570] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.558807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.558819] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.559071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.559083] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.559178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.559189] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.559312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.559324] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.559479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.559492] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 
00:59:11.275 [2024-06-11 03:55:52.559659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.559670] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.559840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.559853] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.559968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.559981] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.560138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.560151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.560277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.560289] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.560516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.560528] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.560624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.560636] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.560810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.560829] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.560933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.560945] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.561188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.561201] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 
00:59:11.275 [2024-06-11 03:55:52.561325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.561337] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.561505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.561518] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.561685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.561697] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.561896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.561908] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.562079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.562092] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.562226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.562239] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.562346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.562359] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.562580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.562592] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.562759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.562772] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.563040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.563054] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 
00:59:11.275 [2024-06-11 03:55:52.563277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.563290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.563389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.563401] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.563524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.563538] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.563762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.563774] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.563894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.563906] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.564130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.564142] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.564304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.564316] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.564413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.564424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.564601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.564612] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.564716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.564729] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 
00:59:11.275 [2024-06-11 03:55:52.564978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.564990] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.565173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.565188] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.565357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.565369] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.565474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.565486] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.565652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.565665] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.565939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.565951] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.566139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.566151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.566234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.566246] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.566470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.566481] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.275 qpair failed and we were unable to recover it. 00:59:11.275 [2024-06-11 03:55:52.566659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.275 [2024-06-11 03:55:52.566671] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 
00:59:11.276 [2024-06-11 03:55:52.566837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.566850] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.567022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.567034] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.567206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.567217] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.567323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.567335] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.567579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.567591] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.567696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.567708] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.567822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.567835] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.568014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.568027] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.568246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.568258] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.568420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.568432] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 
00:59:11.276 [2024-06-11 03:55:52.568588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.568600] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.568755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.568767] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.568858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.568869] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.568980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.568992] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.569119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.569131] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.569353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.569365] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.569540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.569552] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.569666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.569678] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.569851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.569863] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.570090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.570103] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 
00:59:11.276 [2024-06-11 03:55:52.570199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.570211] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.570337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.570349] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.570581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.570595] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.570767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.570780] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.570889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.570901] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.571068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.571081] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.571278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.571290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.571407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.571419] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.571608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.571621] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.571721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.571733] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 
00:59:11.276 [2024-06-11 03:55:52.571828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.571840] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.572015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.572028] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.572280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.572292] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.572413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.572424] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.572673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.572684] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.572879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.572891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.573139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.573151] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.573267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.573279] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.573391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.573403] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 00:59:11.276 [2024-06-11 03:55:52.573587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.276 [2024-06-11 03:55:52.573599] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.276 qpair failed and we were unable to recover it. 
00:59:11.276 [2024-06-11 03:55:52.573853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.276 [2024-06-11 03:55:52.573866] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.276 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats roughly 200 more times between 03:55:52.573 and 03:55:52.610, differing only in timestamps; every retry against tqpair=0x7f01b0000b90 (10.0.0.2:4420) fails with errno = 111 and the qpair cannot be recovered ...]
00:59:11.282 [2024-06-11 03:55:52.610036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.610048] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:59:11.282 [2024-06-11 03:55:52.610292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.610306] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 [2024-06-11 03:55:52.610466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.610478] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@863 -- # return 0
00:59:11.282 [2024-06-11 03:55:52.610727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.610738] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:59:11.282 [2024-06-11 03:55:52.610961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.610973] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable
00:59:11.282 [2024-06-11 03:55:52.611232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.611245] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:59:11.282 [2024-06-11 03:55:52.611421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.611434] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 [2024-06-11 03:55:52.611622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.611634] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 [2024-06-11 03:55:52.611908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.611922] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
00:59:11.282 [2024-06-11 03:55:52.612167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.282 [2024-06-11 03:55:52.612177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.282 qpair failed and we were unable to recover it.
[the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 03:55:52.612339 through 03:55:52.643888]
00:59:11.551 [2024-06-11 03:55:52.644158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:59:11.551 [2024-06-11 03:55:52.644168] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420
00:59:11.551 qpair failed and we were unable to recover it.
00:59:11.551 [2024-06-11 03:55:52.644281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.644290] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.644461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.644471] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.644587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.644598] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.644780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.644790] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.645037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.645047] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.645172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.645182] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.645316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.645326] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.645495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.645505] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.645674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.645683] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 00:59:11.551 [2024-06-11 03:55:52.645951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.551 [2024-06-11 03:55:52.645961] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.551 qpair failed and we were unable to recover it. 
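errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux: the host side's connect() to the target at 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) is refused while no listener is up, which is exactly the condition this target_disconnect test exercises. A minimal, hypothetical shell probe (not part of the harness) that surfaces the same condition:

    # Attempt a bare TCP connection to the NVMe/TCP listener; with no
    # listener present, connect() fails with ECONNREFUSED (errno 111).
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
      && echo "listener up on 4420" \
      || echo "connection refused or timed out (cf. errno = 111 above)"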
[triplet repeats for tqpair=0x7f01b0000b90, 03:55:52.646139 through 03:55:52.646973]
00:59:11.551 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[triplet repeats, 03:55:52.647152 through 03:55:52.647275]
00:59:11.551 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[triplet at 03:55:52.647471, interleaved with the rpc trace above]
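The rpc_cmd line traced above is the harness wrapper around SPDK's JSON-RPC client; the same bdev creation can be issued directly against a running target. A sketch, assuming a default SPDK checkout and the default RPC socket path (the arguments themselves are exactly as traced):

    # Create a 64 MiB malloc bdev with 512-byte blocks, named Malloc0,
    # which the test later exports over the NVMe/TCP transport.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0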
[triplet repeats at 03:55:52.647690]
00:59:11.551 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
[triplet repeats at 03:55:52.647925]
00:59:11.551 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[triplet repeats, 03:55:52.648188 through 03:55:52.649497]
[triplet repeats for tqpair=0x7f01b0000b90, 03:55:52.649622 through 03:55:52.653364]
[triplet repeats for tqpair=0x7f01b0000b90, 03:55:52.653521 through 03:55:52.654874]
[triplet repeats for a second qpair, tqpair=0xb62e70, 03:55:52.655096 through 03:55:52.657780]
[triplet resumes for tqpair=0x7f01b0000b90 at 03:55:52.657962]
[triplet repeats for tqpair=0x7f01b0000b90, 03:55:52.658209 through 03:55:52.664661]
[triplet repeats, 03:55:52.664832 through 03:55:52.666432]
00:59:11.554 Malloc0
[triplet repeats, 03:55:52.666542 through 03:55:52.666673]
[triplet repeats, 03:55:52.666908 through 03:55:52.667182]
00:59:11.554 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
[triplet repeats, 03:55:52.667302 through 03:55:52.667484]
00:59:11.554 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
[triplet repeats at 03:55:52.667725]
00:59:11.554 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
[triplet repeats at 03:55:52.667981]
00:59:11.554 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[triplet repeats, 03:55:52.668168 through 03:55:52.668370]
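Likewise, the nvmf_create_transport call traced above brings up the target's TCP transport, acknowledged by the "*** TCP Transport Init ***" notice further down. A direct-invocation sketch, again assuming a default SPDK checkout; the flags are copied verbatim from the trace (what -o selects is harness-specific and not confirmed here):

    # Initialize the NVMe-oF TCP transport on the target side.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o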
[triplet repeats for tqpair=0x7f01b0000b90, 03:55:52.668475 through 03:55:52.672631]
[triplet repeats, 03:55:52.672815 through 03:55:52.674029]
00:59:11.555 [2024-06-11 03:55:52.674121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[triplet repeats, 03:55:52.674328 through 03:55:52.674572]
00:59:11.555 [2024-06-11 03:55:52.674820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.674830] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.675057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.675067] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.675292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.675302] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.675532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.675541] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.675648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.675657] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.675879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.675889] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.676114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.676124] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.676244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.676254] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.676375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.676385] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.676547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.676556] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 
00:59:11.555 [2024-06-11 03:55:52.676812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.676822] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.677093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.677104] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.677288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.677298] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.677428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.677437] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.677667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.677676] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.677881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.677891] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.678167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.678177] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.678365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.678375] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.678575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.678584] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 00:59:11.555 [2024-06-11 03:55:52.678755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:59:11.555 [2024-06-11 03:55:52.678764] nvme_tcp.c:2378:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f01b0000b90 with addr=10.0.0.2, port=4420 00:59:11.555 qpair failed and we were unable to recover it. 
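
For context: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting on 10.0.0.2:4420 yet while the initiator retried; the target's TCP transport has only just initialized at this point. A minimal, hypothetical host-side probe (not part of this test) that waits for the listener to appear might look like:

    # Hypothetical probe; assumes a netcat build that supports -z (scan only).
    # errno 111 / ECONNREFUSED means the SYN was answered with RST: no listener yet.
    until nc -z 10.0.0.2 4420; do
        sleep 0.1    # keep retrying until the NVMe/TCP listener accepts
    done
    echo "listener on 10.0.0.2:4420 is up"
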
00:59:11.555 [... connect() retries keep failing with errno = 111 (03:55:52.679035 through 03:55:52.680618), interleaved with the test script's trace output: ...]
00:59:11.555 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:59:11.555 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:59:11.555 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:59:11.555 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:59:11.556 [... connect() to 10.0.0.2:4420 fails with errno = 111 on every retry from 03:55:52.680844 through 03:55:52.691193, each attempt logging the same triplet and ending with "qpair failed and we were unable to recover it." ...]
00:59:11.557 [... connect() retries keep failing with errno = 111 (03:55:52.691443 through 03:55:52.693008), interleaved with the next step of the test script: ...]
00:59:11.557 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:59:11.557 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:59:11.557 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:59:11.557 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:59:11.557 [... connect() retries continue to fail with errno = 111 from 03:55:52.693241 through 03:55:52.699040 ...]
00:59:11.558 [... connect() retries keep failing with errno = 111 (03:55:52.699222 through 03:55:52.700586), interleaved with the listener setup step of the test script: ...]
00:59:11.558 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:59:11.558 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:59:11.558 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:59:11.558 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:59:11.558 [... connect() retries continue to fail with errno = 111 from 03:55:52.700763 through 03:55:52.702339 ...]
00:59:11.558 [... a few final connect() retries fail with errno = 111 (03:55:52.702514 through 03:55:52.703066) before the listener comes up ...]
00:59:11.559 [2024-06-11 03:55:52.703120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:59:11.559 [2024-06-11 03:55:52.704666] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.559 [2024-06-11 03:55:52.704751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.559 [2024-06-11 03:55:52.704771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.559 [2024-06-11 03:55:52.704779] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.559 [2024-06-11 03:55:52.704786] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.559 [2024-06-11 03:55:52.704805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.559 qpair failed and we were unable to recover it.
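
For context: once the target starts listening, the failure mode changes. The host's CONNECT for an I/O queue (qpair id 2) is now rejected because the target does not know controller ID 0x1; the controller that qpair belonged to evidently did not survive the disconnect this test exercises. The reported "sct 1, sc 130" decodes to status code type 1 (command specific) with status 0x82, which the NVMe-oF spec defines for the Fabrics CONNECT command as "Connect Invalid Parameters". A quick decode of the status value printed above:

    # sct 1 = command-specific status code type; sc is logged in decimal.
    printf 'sc 130 = 0x%02x\n' 130    # -> sc 130 = 0x82 (CONNECT: Invalid Parameters)
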
00:59:11.559 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:59:11.559 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:59:11.559 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:59:11.559 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:59:11.559 [... another Unknown controller ID 0x1 / Fabric CONNECT failure block, identical to the one above, at 03:55:52.714612 through 03:55:52.714733, ending with "qpair failed and we were unable to recover it." ...]
00:59:11.559 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:59:11.559 03:55:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2412792
00:59:11.559 [... another identical Unknown controller ID 0x1 CONNECT failure block at 03:55:52.724579 through 03:55:52.724689 ...]
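
Pieced together from the rpc_cmd traces above, the target-side setup for this test case amounts to the following sequence. This is a sketch reconstructed from the log, assuming SPDK's scripts/rpc.py: the Malloc0 bdev is created earlier in the run, and the transport-creation step is implied by the "TCP Transport Init" notice rather than traced here.

    # Reconstructed target configuration (sketch, not the verbatim test script):
    scripts/rpc.py nvmf_create_transport -t tcp            # implied by "*** TCP Transport Init ***"
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
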
00:59:11.559 [2024-06-11 03:55:52.734629] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:11.559 [2024-06-11 03:55:52.734703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:11.559 [2024-06-11 03:55:52.734719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:11.559 [2024-06-11 03:55:52.734727] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:11.559 [2024-06-11 03:55:52.734735] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:11.559 [2024-06-11 03:55:52.734751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:11.559 qpair failed and we were unable to recover it. 00:59:11.559 [2024-06-11 03:55:52.744642] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:11.559 [2024-06-11 03:55:52.744713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:11.559 [2024-06-11 03:55:52.744728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:11.559 [2024-06-11 03:55:52.744735] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:11.559 [2024-06-11 03:55:52.744741] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:11.559 [2024-06-11 03:55:52.744761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:11.559 qpair failed and we were unable to recover it. 00:59:11.559 [2024-06-11 03:55:52.754606] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:11.559 [2024-06-11 03:55:52.754664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:11.559 [2024-06-11 03:55:52.754682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:11.559 [2024-06-11 03:55:52.754688] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:11.559 [2024-06-11 03:55:52.754694] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:11.559 [2024-06-11 03:55:52.754709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:11.559 qpair failed and we were unable to recover it. 
00:59:11.559 [2024-06-11 03:55:52.764623] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:11.559 [2024-06-11 03:55:52.764683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:11.559 [2024-06-11 03:55:52.764699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:11.559 [2024-06-11 03:55:52.764706] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:11.559 [2024-06-11 03:55:52.764712] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:11.559 [2024-06-11 03:55:52.764728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:11.559 qpair failed and we were unable to recover it. 00:59:11.559 [2024-06-11 03:55:52.774632] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:11.559 [2024-06-11 03:55:52.774768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:11.559 [2024-06-11 03:55:52.774784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:11.559 [2024-06-11 03:55:52.774792] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:11.559 [2024-06-11 03:55:52.774798] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:11.559 [2024-06-11 03:55:52.774813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:11.559 qpair failed and we were unable to recover it. 00:59:11.559 [2024-06-11 03:55:52.784767] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:11.559 [2024-06-11 03:55:52.784830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:11.559 [2024-06-11 03:55:52.784846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:11.559 [2024-06-11 03:55:52.784853] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:11.559 [2024-06-11 03:55:52.784860] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:11.559 [2024-06-11 03:55:52.784874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:11.559 qpair failed and we were unable to recover it. 
00:59:11.559 [2024-06-11 03:55:52.794750] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.559 [2024-06-11 03:55:52.794811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.559 [2024-06-11 03:55:52.794825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.559 [2024-06-11 03:55:52.794831] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.559 [2024-06-11 03:55:52.794840] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.559 [2024-06-11 03:55:52.794855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.559 qpair failed and we were unable to recover it.
00:59:11.559 [2024-06-11 03:55:52.804811] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.559 [2024-06-11 03:55:52.804873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.559 [2024-06-11 03:55:52.804888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.559 [2024-06-11 03:55:52.804894] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.559 [2024-06-11 03:55:52.804901] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.559 [2024-06-11 03:55:52.804915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.559 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.814857] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.814954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.814969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.814976] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.814982] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.814996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.824880] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.824942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.824956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.824962] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.824968] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.824982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.834925] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.834990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.835004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.835015] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.835021] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.835036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.844927] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.845077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.845093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.845100] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.845106] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.845122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.854922] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.854995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.855015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.855022] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.855028] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.855043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.864946] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.865016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.865031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.865038] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.865044] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.865058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.875006] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.875115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.875130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.875137] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.875143] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.875157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.884944] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.885006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.885025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.885032] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.885044] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.885059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.895087] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.895162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.895176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.895182] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.895189] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.895203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.905089] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.905153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.905167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.905174] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.905180] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.905194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.915141] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.915202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.915216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.915222] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.915229] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.915243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.925094] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.925173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.925188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.925194] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.925200] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.925214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.935121] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.935183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.935200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.560 [2024-06-11 03:55:52.935207] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.560 [2024-06-11 03:55:52.935213] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.560 [2024-06-11 03:55:52.935228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.560 qpair failed and we were unable to recover it.
00:59:11.560 [2024-06-11 03:55:52.945146] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.560 [2024-06-11 03:55:52.945217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.560 [2024-06-11 03:55:52.945232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.561 [2024-06-11 03:55:52.945239] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.561 [2024-06-11 03:55:52.945245] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.561 [2024-06-11 03:55:52.945260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.561 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:52.955218] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:52.955298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:52.955314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:52.955320] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:52.955327] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:52.955341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:52.965289] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:52.965384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:52.965400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:52.965406] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:52.965412] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:52.965427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:52.975200] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:52.975272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:52.975286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:52.975296] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:52.975302] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:52.975317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:52.985275] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:52.985338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:52.985352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:52.985359] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:52.985365] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:52.985379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:52.995332] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:52.995396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:52.995411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:52.995418] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:52.995424] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:52.995438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:53.005363] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:53.005424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:53.005438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:53.005444] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:53.005450] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:53.005464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:53.015366] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:53.015469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:53.015484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:53.015491] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:53.015497] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:53.015512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:53.025314] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:53.025379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:53.025393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:53.025399] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:53.025405] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:53.025419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:53.035421] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:53.035506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:53.035521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:53.035527] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:53.035534] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:53.035548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:53.045378] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:53.045438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:53.045453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:53.045460] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:53.045466] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:53.045480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:53.055453] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:53.055514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:53.055528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.820 [2024-06-11 03:55:53.055534] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.820 [2024-06-11 03:55:53.055540] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.820 [2024-06-11 03:55:53.055555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.820 qpair failed and we were unable to recover it.
00:59:11.820 [2024-06-11 03:55:53.065420] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.820 [2024-06-11 03:55:53.065485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.820 [2024-06-11 03:55:53.065502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.065509] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.065515] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.065529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.075459] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.075521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.075535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.075541] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.075548] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.075562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.085472] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.085537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.085551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.085558] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.085564] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.085578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.095575] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.095636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.095650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.095656] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.095662] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.095676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.105599] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.105656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.105670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.105677] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.105683] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.105699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.115685] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.115763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.115778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.115784] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.115790] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.115805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.125695] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.125760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.125774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.125781] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.125787] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.125801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.135682] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.135744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.135757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.135764] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.135770] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.135784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.145727] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.145792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.145806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.145813] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.145819] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.145833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.155745] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.155805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.155823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.155829] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.155835] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.155850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.165785] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.165844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.165858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.165865] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.165871] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.165885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.175830] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.175938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.175953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.175959] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.175966] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.175980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.185830] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.185889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.185903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.185910] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.185916] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.185931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.821 qpair failed and we were unable to recover it.
00:59:11.821 [2024-06-11 03:55:53.195825] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.821 [2024-06-11 03:55:53.195881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.821 [2024-06-11 03:55:53.195895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.821 [2024-06-11 03:55:53.195902] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.821 [2024-06-11 03:55:53.195908] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.821 [2024-06-11 03:55:53.195925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.822 qpair failed and we were unable to recover it.
00:59:11.822 [2024-06-11 03:55:53.205914] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.822 [2024-06-11 03:55:53.205994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.822 [2024-06-11 03:55:53.206014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.822 [2024-06-11 03:55:53.206021] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.822 [2024-06-11 03:55:53.206027] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.822 [2024-06-11 03:55:53.206042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.822 qpair failed and we were unable to recover it.
00:59:11.822 [2024-06-11 03:55:53.215907] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:11.822 [2024-06-11 03:55:53.215969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:11.822 [2024-06-11 03:55:53.215983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:11.822 [2024-06-11 03:55:53.215990] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:11.822 [2024-06-11 03:55:53.215996] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:11.822 [2024-06-11 03:55:53.216014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:11.822 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.225945] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.226020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.226034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.226040] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.226046] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.226061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.235958] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.236023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.236038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.236045] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.236051] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.236066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.246004] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.246071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.246086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.246092] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.246098] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.246113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.256044] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.256108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.256122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.256128] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.256134] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.256148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.266047] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.266109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.266123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.266129] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.266135] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.266149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.276134] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.276190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.276205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.276211] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.276217] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.276231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.286141] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.286251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.286267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.286274] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.286284] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.286298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.296141] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.296203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.296217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.296223] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.296229] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.296244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.306161] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.306227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.306242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.306248] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.306254] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.306268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.316206] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.316266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.316280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.316287] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.081 [2024-06-11 03:55:53.316293] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.081 [2024-06-11 03:55:53.316307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.081 qpair failed and we were unable to recover it.
00:59:12.081 [2024-06-11 03:55:53.326242] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.081 [2024-06-11 03:55:53.326296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.081 [2024-06-11 03:55:53.326311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.081 [2024-06-11 03:55:53.326317] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.082 [2024-06-11 03:55:53.326323] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.082 [2024-06-11 03:55:53.326337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.082 qpair failed and we were unable to recover it.
00:59:12.082 [2024-06-11 03:55:53.336338] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.082 [2024-06-11 03:55:53.336422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.082 [2024-06-11 03:55:53.336437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.082 [2024-06-11 03:55:53.336444] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.082 [2024-06-11 03:55:53.336450] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.082 [2024-06-11 03:55:53.336465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.082 qpair failed and we were unable to recover it.
00:59:12.082 [2024-06-11 03:55:53.346291] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.082 [2024-06-11 03:55:53.346352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.082 [2024-06-11 03:55:53.346367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.082 [2024-06-11 03:55:53.346373] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.082 [2024-06-11 03:55:53.346380] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.082 [2024-06-11 03:55:53.346394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.082 qpair failed and we were unable to recover it.
00:59:12.082 [2024-06-11 03:55:53.356315] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.082 [2024-06-11 03:55:53.356374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.082 [2024-06-11 03:55:53.356388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.082 [2024-06-11 03:55:53.356394] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.082 [2024-06-11 03:55:53.356400] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.082 [2024-06-11 03:55:53.356414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.082 qpair failed and we were unable to recover it.
00:59:12.082 [2024-06-11 03:55:53.366386] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.366444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.366459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.366465] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.366471] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.366485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 00:59:12.082 [2024-06-11 03:55:53.376378] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.376438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.376452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.376461] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.376467] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.376482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 00:59:12.082 [2024-06-11 03:55:53.386397] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.386454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.386469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.386475] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.386481] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.386495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 
00:59:12.082 [2024-06-11 03:55:53.396430] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.396491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.396505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.396511] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.396518] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.396532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 00:59:12.082 [2024-06-11 03:55:53.406442] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.406523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.406538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.406544] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.406550] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.406564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 00:59:12.082 [2024-06-11 03:55:53.416541] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.416601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.416615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.416622] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.416627] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.416642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 
00:59:12.082 [2024-06-11 03:55:53.426514] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.426578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.426593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.426599] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.426606] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.426620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 00:59:12.082 [2024-06-11 03:55:53.436475] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.436534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.436548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.436554] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.436560] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.436574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 00:59:12.082 [2024-06-11 03:55:53.446572] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.446629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.446643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.446649] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.446655] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.446669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 
00:59:12.082 [2024-06-11 03:55:53.456609] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.082 [2024-06-11 03:55:53.456679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.082 [2024-06-11 03:55:53.456693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.082 [2024-06-11 03:55:53.456700] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.082 [2024-06-11 03:55:53.456706] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.082 [2024-06-11 03:55:53.456720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.082 qpair failed and we were unable to recover it. 00:59:12.082 [2024-06-11 03:55:53.466557] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.083 [2024-06-11 03:55:53.466618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.083 [2024-06-11 03:55:53.466631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.083 [2024-06-11 03:55:53.466641] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.083 [2024-06-11 03:55:53.466647] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.083 [2024-06-11 03:55:53.466661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.083 qpair failed and we were unable to recover it. 00:59:12.083 [2024-06-11 03:55:53.476710] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.083 [2024-06-11 03:55:53.476767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.083 [2024-06-11 03:55:53.476781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.083 [2024-06-11 03:55:53.476787] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.083 [2024-06-11 03:55:53.476793] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.083 [2024-06-11 03:55:53.476807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.083 qpair failed and we were unable to recover it. 
00:59:12.341 [2024-06-11 03:55:53.486682] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.341 [2024-06-11 03:55:53.486750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.486765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.486771] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.486778] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.486792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.496759] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.496865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.496881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.496887] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.496893] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.496908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.506704] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.506770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.506784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.506791] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.506797] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.506811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 
00:59:12.342 [2024-06-11 03:55:53.516782] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.516840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.516855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.516861] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.516867] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.516881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.526836] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.526892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.526908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.526914] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.526919] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.526934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.536832] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.536894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.536908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.536914] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.536920] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.536935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 
00:59:12.342 [2024-06-11 03:55:53.546857] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.546952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.546967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.546973] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.546979] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.546994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.556888] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.556966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.556984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.556991] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.556996] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.557014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.566943] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.567016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.567031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.567037] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.567043] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.567058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 
00:59:12.342 [2024-06-11 03:55:53.576946] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.577004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.577021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.577028] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.577034] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.577048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.586963] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.587045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.587061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.587068] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.587074] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.587090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.597027] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.597092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.597106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.597114] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.597120] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.597139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 
00:59:12.342 [2024-06-11 03:55:53.607006] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.607070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.607085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.607092] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.342 [2024-06-11 03:55:53.607098] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.342 [2024-06-11 03:55:53.607112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.342 qpair failed and we were unable to recover it. 00:59:12.342 [2024-06-11 03:55:53.617143] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.342 [2024-06-11 03:55:53.617207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.342 [2024-06-11 03:55:53.617222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.342 [2024-06-11 03:55:53.617229] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.617235] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.617250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 00:59:12.343 [2024-06-11 03:55:53.627076] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.627139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.627153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.627159] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.627165] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.627179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 
00:59:12.343 [2024-06-11 03:55:53.637107] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.637165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.637179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.637186] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.637192] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.637207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 00:59:12.343 [2024-06-11 03:55:53.647155] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.647216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.647234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.647240] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.647247] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.647261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 00:59:12.343 [2024-06-11 03:55:53.657191] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.657253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.657268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.657274] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.657280] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.657295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 
00:59:12.343 [2024-06-11 03:55:53.667200] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.667262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.667278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.667285] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.667291] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.667306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 00:59:12.343 [2024-06-11 03:55:53.677236] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.677346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.677361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.677368] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.677374] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.677388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 00:59:12.343 [2024-06-11 03:55:53.687288] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.687349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.687363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.687369] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.687378] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.687393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 
00:59:12.343 [2024-06-11 03:55:53.697295] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.697356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.697370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.697377] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.697383] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.697397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 00:59:12.343 [2024-06-11 03:55:53.707352] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.707415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.707429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.707435] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.707441] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.707455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 00:59:12.343 [2024-06-11 03:55:53.717404] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.343 [2024-06-11 03:55:53.717465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.343 [2024-06-11 03:55:53.717478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.343 [2024-06-11 03:55:53.717484] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.343 [2024-06-11 03:55:53.717491] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.343 [2024-06-11 03:55:53.717505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.343 qpair failed and we were unable to recover it. 
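
The "Unknown controller ID 0x1" half of each cycle follows from what a Fabrics CONNECT carries. The admin-queue CONNECT asks the target for a controller and receives a controller ID in return; every subsequent I/O-queue CONNECT must echo that ID so the target can attach the new queue to the right controller. If the target no longer has a controller with that ID, for instance because it was destroyed between the two steps, the lookup fails and every retry fails identically, which is the pattern condensed above. For orientation, here is a sketch of the 1024-byte CONNECT data block as the spec lays it out; the field names are descriptive choices for this sketch, not taken from any particular code base.

#include <stdint.h>

/* Layout of the 1024-byte data block carried by a Fabrics CONNECT
 * command (NVMe over Fabrics specification). */
struct nvmf_connect_data {
	uint8_t  hostid[16];   /* host identifier */
	uint16_t cntlid;       /* controller ID: 0xFFFF on the admin-queue
	                        * CONNECT in the dynamic controller model;
	                        * an I/O-queue CONNECT must echo the ID the
	                        * target returned earlier - 0x1 in this log */
	uint8_t  reserved1[238];
	uint8_t  subnqn[256];  /* subsystem NQN; nqn.2016-06.io.spdk:cnode1 here */
	uint8_t  hostnqn[256]; /* host NQN */
	uint8_t  reserved2[256];
};

/* 16 + 2 + 238 + 256 + 256 + 256 == 1024, with no padding inserted. */
_Static_assert(sizeof(struct nvmf_connect_data) == 1024,
               "CONNECT data block is defined as 1024 bytes");
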
[The identical cycle continues through target timestamps 03:55:53.727 to 03:55:53.978. The final attempt of the series, at 03:55:53.988, fails the same way:]
00:59:12.604 [2024-06-11 03:55:53.988129] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:12.604 [2024-06-11 03:55:53.988204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:12.604 [2024-06-11 03:55:53.988219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:12.604 [2024-06-11 03:55:53.988226] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:12.604 [2024-06-11 03:55:53.988232] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90
00:59:12.604 [2024-06-11 03:55:53.988247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:59:12.604 qpair failed and we were unable to recover it.
00:59:12.604 [2024-06-11 03:55:53.998144] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.604 [2024-06-11 03:55:53.998200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.604 [2024-06-11 03:55:53.998214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.604 [2024-06-11 03:55:53.998221] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.604 [2024-06-11 03:55:53.998227] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.604 [2024-06-11 03:55:53.998245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.604 qpair failed and we were unable to recover it. 00:59:12.863 [2024-06-11 03:55:54.008226] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.863 [2024-06-11 03:55:54.008332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.863 [2024-06-11 03:55:54.008347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.863 [2024-06-11 03:55:54.008354] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.863 [2024-06-11 03:55:54.008360] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.863 [2024-06-11 03:55:54.008374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.863 qpair failed and we were unable to recover it. 00:59:12.863 [2024-06-11 03:55:54.018216] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.863 [2024-06-11 03:55:54.018283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.863 [2024-06-11 03:55:54.018297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.863 [2024-06-11 03:55:54.018304] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.863 [2024-06-11 03:55:54.018310] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.863 [2024-06-11 03:55:54.018324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.863 qpair failed and we were unable to recover it. 
00:59:12.863 [2024-06-11 03:55:54.028311] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.863 [2024-06-11 03:55:54.028391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.863 [2024-06-11 03:55:54.028406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.863 [2024-06-11 03:55:54.028413] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.863 [2024-06-11 03:55:54.028419] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.863 [2024-06-11 03:55:54.028433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.863 qpair failed and we were unable to recover it. 00:59:12.863 [2024-06-11 03:55:54.038315] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.863 [2024-06-11 03:55:54.038373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.863 [2024-06-11 03:55:54.038387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.863 [2024-06-11 03:55:54.038394] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.863 [2024-06-11 03:55:54.038400] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.863 [2024-06-11 03:55:54.038414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.863 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.048305] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.048383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.048402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.048409] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.048414] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.048429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 
00:59:12.864 [2024-06-11 03:55:54.058314] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.058374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.058388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.058394] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.058400] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.058414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.068389] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.068454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.068468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.068475] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.068480] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.068494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.078360] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.078415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.078430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.078436] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.078442] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.078457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 
00:59:12.864 [2024-06-11 03:55:54.088343] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.088400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.088414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.088421] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.088430] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.088444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.098439] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.098501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.098515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.098521] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.098528] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.098542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.108475] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.108540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.108554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.108561] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.108567] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.108581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 
00:59:12.864 [2024-06-11 03:55:54.118541] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.118647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.118662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.118669] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.118675] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.118689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.128557] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.128615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.128629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.128636] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.128642] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.128656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.138565] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.138630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.138645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.138652] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.138658] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.138672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 
00:59:12.864 [2024-06-11 03:55:54.148621] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.148685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.148699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.148705] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.148711] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.148726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.158652] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.158709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.158723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.158729] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.158736] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.158750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 00:59:12.864 [2024-06-11 03:55:54.168651] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.168723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.168738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.864 [2024-06-11 03:55:54.168745] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.864 [2024-06-11 03:55:54.168751] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.864 [2024-06-11 03:55:54.168765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.864 qpair failed and we were unable to recover it. 
00:59:12.864 [2024-06-11 03:55:54.178645] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.864 [2024-06-11 03:55:54.178707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.864 [2024-06-11 03:55:54.178722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.178730] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.178740] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.178755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 00:59:12.865 [2024-06-11 03:55:54.188712] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.188787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.188805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.188812] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.188818] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.188833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 00:59:12.865 [2024-06-11 03:55:54.198652] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.198714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.198729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.198735] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.198741] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.198755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 
00:59:12.865 [2024-06-11 03:55:54.208771] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.208863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.208879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.208886] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.208891] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.208906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 00:59:12.865 [2024-06-11 03:55:54.218762] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.218841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.218855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.218862] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.218868] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.218882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 00:59:12.865 [2024-06-11 03:55:54.228828] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.228890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.228904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.228910] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.228916] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.228930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 
00:59:12.865 [2024-06-11 03:55:54.238786] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.238845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.238859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.238865] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.238871] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.238885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 00:59:12.865 [2024-06-11 03:55:54.248839] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.248902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.248917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.248924] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.248930] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.248945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 00:59:12.865 [2024-06-11 03:55:54.258916] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:12.865 [2024-06-11 03:55:54.258975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:12.865 [2024-06-11 03:55:54.258991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:12.865 [2024-06-11 03:55:54.258997] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:12.865 [2024-06-11 03:55:54.259004] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b0000b90 00:59:12.865 [2024-06-11 03:55:54.259023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:59:12.865 qpair failed and we were unable to recover it. 
00:59:13.124 [2024-06-11 03:55:54.268922] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.269015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.269047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.269065] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.269076] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.269101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 00:59:13.124 [2024-06-11 03:55:54.278998] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.279073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.279091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.279099] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.279105] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.279121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 00:59:13.124 [2024-06-11 03:55:54.289030] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.289141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.289158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.289165] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.289171] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.289186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 
00:59:13.124 [2024-06-11 03:55:54.299039] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.299106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.299123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.299130] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.299136] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.299151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 00:59:13.124 [2024-06-11 03:55:54.309013] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.309079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.309095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.309102] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.309109] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.309124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 00:59:13.124 [2024-06-11 03:55:54.319108] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.319170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.319186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.319193] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.319199] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.319214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 
00:59:13.124 [2024-06-11 03:55:54.329109] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.329172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.329187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.329194] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.329201] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.329216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 00:59:13.124 [2024-06-11 03:55:54.339108] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.339174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.124 [2024-06-11 03:55:54.339189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.124 [2024-06-11 03:55:54.339196] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.124 [2024-06-11 03:55:54.339202] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.124 [2024-06-11 03:55:54.339216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.124 qpair failed and we were unable to recover it. 00:59:13.124 [2024-06-11 03:55:54.349174] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.124 [2024-06-11 03:55:54.349237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.349254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.349260] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.349266] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.349281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 
00:59:13.125 [2024-06-11 03:55:54.359206] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.359268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.359283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.359293] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.359299] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.359314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.369188] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.369253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.369269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.369276] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.369283] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.369297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.379310] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.379375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.379390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.379397] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.379403] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.379417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 
00:59:13.125 [2024-06-11 03:55:54.389289] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.389345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.389361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.389368] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.389374] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.389388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.399318] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.399381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.399396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.399403] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.399409] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.399424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.409290] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.409377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.409392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.409399] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.409405] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.409420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 
00:59:13.125 [2024-06-11 03:55:54.419355] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.419442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.419458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.419465] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.419471] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.419485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.429412] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.429476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.429492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.429499] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.429505] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.429519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.439448] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.439510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.439526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.439533] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.439539] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.439554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 
00:59:13.125 [2024-06-11 03:55:54.449520] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.449582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.449602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.449609] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.449615] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.449630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.459489] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.459552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.459567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.459574] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.459580] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.459594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 00:59:13.125 [2024-06-11 03:55:54.469559] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.469619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.469634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.469641] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.469647] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.469661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.125 qpair failed and we were unable to recover it. 
00:59:13.125 [2024-06-11 03:55:54.479612] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.125 [2024-06-11 03:55:54.479670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.125 [2024-06-11 03:55:54.479685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.125 [2024-06-11 03:55:54.479692] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.125 [2024-06-11 03:55:54.479698] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.125 [2024-06-11 03:55:54.479712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.126 qpair failed and we were unable to recover it. 00:59:13.126 [2024-06-11 03:55:54.489598] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.126 [2024-06-11 03:55:54.489659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.126 [2024-06-11 03:55:54.489675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.126 [2024-06-11 03:55:54.489682] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.126 [2024-06-11 03:55:54.489688] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.126 [2024-06-11 03:55:54.489705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.126 qpair failed and we were unable to recover it. 00:59:13.126 [2024-06-11 03:55:54.499595] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.126 [2024-06-11 03:55:54.499671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.126 [2024-06-11 03:55:54.499688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.126 [2024-06-11 03:55:54.499695] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.126 [2024-06-11 03:55:54.499702] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70 00:59:13.126 [2024-06-11 03:55:54.499717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:59:13.126 qpair failed and we were unable to recover it. 
00:59:13.126 [2024-06-11 03:55:54.509620] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.126 [2024-06-11 03:55:54.509682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.126 [2024-06-11 03:55:54.509698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.126 [2024-06-11 03:55:54.509705] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.126 [2024-06-11 03:55:54.509711] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.126 [2024-06-11 03:55:54.509725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.126 qpair failed and we were unable to recover it.
00:59:13.126 [2024-06-11 03:55:54.519680] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.126 [2024-06-11 03:55:54.519742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.126 [2024-06-11 03:55:54.519757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.126 [2024-06-11 03:55:54.519764] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.126 [2024-06-11 03:55:54.519770] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.126 [2024-06-11 03:55:54.519785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.126 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.529717] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.529792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.529810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.529816] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.529822] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.529836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.539779] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.539934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.539953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.539960] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.539966] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.539981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.549698] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.549776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.549793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.549799] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.549806] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.549820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.559845] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.559952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.559969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.559975] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.559982] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.559997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.569830] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.569887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.569902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.569909] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.569915] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.569930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.579879] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.579980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.579997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.580003] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.580013] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.580032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.589867] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.589931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.589948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.589955] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.589961] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.589975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.599900] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.599968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.599983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.599990] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.599996] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.600015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.609974] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.610035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.610051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.610058] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.610065] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.610078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.619959] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.620217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.620234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.620241] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.620247] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.620262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.385 qpair failed and we were unable to recover it.
00:59:13.385 [2024-06-11 03:55:54.630023] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.385 [2024-06-11 03:55:54.630087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.385 [2024-06-11 03:55:54.630105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.385 [2024-06-11 03:55:54.630112] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.385 [2024-06-11 03:55:54.630118] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.385 [2024-06-11 03:55:54.630133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.640001] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.640063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.640079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.640086] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.640092] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.640106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.650048] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.650113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.650129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.650136] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.650142] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.650157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.660089] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.660188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.660205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.660212] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.660219] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.660234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.670122] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.670233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.670250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.670257] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.670264] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.670283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.680217] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.680300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.680316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.680323] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.680330] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.680344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.690205] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.690291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.690308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.690315] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.690322] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.690336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.700215] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.700320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.700336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.700343] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.700349] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.700365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.710252] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.710320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.710336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.710342] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.710348] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.710363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.720256] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.720344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.720363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.720370] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.720376] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.720390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.730262] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.730322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.730337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.730344] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.730350] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.730364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.740323] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.740396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.740434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.740441] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.740447] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.740461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.750375] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.750437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.750453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.750459] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.750466] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.750480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.760358] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.760455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.760471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.386 [2024-06-11 03:55:54.760478] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.386 [2024-06-11 03:55:54.760487] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.386 [2024-06-11 03:55:54.760502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.386 qpair failed and we were unable to recover it.
00:59:13.386 [2024-06-11 03:55:54.770386] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.386 [2024-06-11 03:55:54.770449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.386 [2024-06-11 03:55:54.770464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.387 [2024-06-11 03:55:54.770471] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.387 [2024-06-11 03:55:54.770477] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.387 [2024-06-11 03:55:54.770491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.387 qpair failed and we were unable to recover it.
00:59:13.387 [2024-06-11 03:55:54.780432] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.387 [2024-06-11 03:55:54.780496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.387 [2024-06-11 03:55:54.780511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.387 [2024-06-11 03:55:54.780518] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.387 [2024-06-11 03:55:54.780524] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.387 [2024-06-11 03:55:54.780537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.387 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.790437] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.790505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.790521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.790528] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.790534] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.790549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.800486] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.800552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.800568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.800574] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.800580] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.800595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.810564] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.810671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.810687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.810694] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.810700] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.810714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.820577] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.820645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.820660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.820667] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.820673] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.820687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.830547] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.830659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.830676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.830682] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.830689] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.830702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.840583] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.840642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.840657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.840664] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.840670] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.840683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.850617] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.850682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.850698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.850705] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.850718] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.850733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.860680] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.860788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.860803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.860810] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.860816] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.860831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.870667] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.870732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.870747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.870754] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.870760] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.870774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.880768] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.880876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.880892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.880899] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.880905] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.880920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.890755] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.890827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.646 [2024-06-11 03:55:54.890843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.646 [2024-06-11 03:55:54.890850] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.646 [2024-06-11 03:55:54.890856] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.646 [2024-06-11 03:55:54.890870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.646 qpair failed and we were unable to recover it.
00:59:13.646 [2024-06-11 03:55:54.900861] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.646 [2024-06-11 03:55:54.900939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.900956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.900962] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.900969] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.900983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.910834] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.910892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.910907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.910914] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.910920] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.910933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.920845] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.920911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.920926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.920933] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.920939] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.920953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.930890] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.930952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.930967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.930974] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.930980] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.930994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.940857] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.940925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.940940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.940948] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.940961] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.940976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.950923] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.950982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.950997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.951004] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.951014] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.951029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.960935] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.960997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.961016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.961023] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.961030] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.961044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.970981] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.971055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.971073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.971081] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.971088] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.971102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.981052] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.981114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.981128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.981135] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.981141] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.981155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:54.991015] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:54.991079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:54.991094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:54.991100] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:54.991106] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:54.991120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:55.001053] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:55.001112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:55.001128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:55.001135] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:55.001141] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:55.001155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:55.011078] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:55.011153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:55.011171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:55.011178] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:55.011185] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:55.011200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:55.021189] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:55.021254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:55.021269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:55.021276] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:55.021281] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:55.021296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.647 qpair failed and we were unable to recover it.
00:59:13.647 [2024-06-11 03:55:55.031197] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.647 [2024-06-11 03:55:55.031304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.647 [2024-06-11 03:55:55.031320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.647 [2024-06-11 03:55:55.031327] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.647 [2024-06-11 03:55:55.031336] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.647 [2024-06-11 03:55:55.031351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.648 qpair failed and we were unable to recover it.
00:59:13.648 [2024-06-11 03:55:55.041170] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.648 [2024-06-11 03:55:55.041232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.648 [2024-06-11 03:55:55.041247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.648 [2024-06-11 03:55:55.041254] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.648 [2024-06-11 03:55:55.041260] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.648 [2024-06-11 03:55:55.041274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.648 qpair failed and we were unable to recover it.
00:59:13.907 [2024-06-11 03:55:55.051253] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.907 [2024-06-11 03:55:55.051356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.907 [2024-06-11 03:55:55.051374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.907 [2024-06-11 03:55:55.051381] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.907 [2024-06-11 03:55:55.051387] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.907 [2024-06-11 03:55:55.051402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.907 qpair failed and we were unable to recover it.
00:59:13.907 [2024-06-11 03:55:55.061254] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.907 [2024-06-11 03:55:55.061353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.907 [2024-06-11 03:55:55.061369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.907 [2024-06-11 03:55:55.061376] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.907 [2024-06-11 03:55:55.061382] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.907 [2024-06-11 03:55:55.061396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.907 qpair failed and we were unable to recover it.
00:59:13.907 [2024-06-11 03:55:55.071296] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:59:13.907 [2024-06-11 03:55:55.071377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:59:13.907 [2024-06-11 03:55:55.071393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:59:13.907 [2024-06-11 03:55:55.071400] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:59:13.907 [2024-06-11 03:55:55.071406] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb62e70
00:59:13.907 [2024-06-11 03:55:55.071420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:59:13.907 qpair failed and we were unable to recover it.
00:59:13.907 [2024-06-11 03:55:55.081329] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.081412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.081444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.081457] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.081468] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.081494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 00:59:13.907 [2024-06-11 03:55:55.091377] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.091441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.091458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.091466] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.091478] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.091494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 00:59:13.907 [2024-06-11 03:55:55.101320] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.101410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.101426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.101434] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.101440] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.101455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 
00:59:13.907 [2024-06-11 03:55:55.111425] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.111496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.111511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.111518] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.111524] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.111542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 00:59:13.907 [2024-06-11 03:55:55.121442] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.121507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.121523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.121535] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.121542] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.121556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 00:59:13.907 [2024-06-11 03:55:55.131472] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.131535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.131550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.131556] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.131563] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.131577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 
00:59:13.907 [2024-06-11 03:55:55.141522] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.141582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.141597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.141604] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.141610] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.141625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 00:59:13.907 [2024-06-11 03:55:55.151550] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.151643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.151659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.151666] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.151672] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.151686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 00:59:13.907 [2024-06-11 03:55:55.161558] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.907 [2024-06-11 03:55:55.161622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.907 [2024-06-11 03:55:55.161637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.907 [2024-06-11 03:55:55.161644] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.907 [2024-06-11 03:55:55.161650] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.907 [2024-06-11 03:55:55.161669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.907 qpair failed and we were unable to recover it. 
00:59:13.907 [2024-06-11 03:55:55.171553] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.171615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.171630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.171637] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.171643] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.171658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.181595] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.181658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.181673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.181680] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.181687] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.181702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.191608] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.191670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.191685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.191692] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.191698] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.191714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 
00:59:13.908 [2024-06-11 03:55:55.201644] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.201703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.201718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.201724] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.201730] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.201744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.211734] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.211796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.211814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.211821] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.211827] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.211842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.221732] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.221794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.221808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.221815] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.221821] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.221835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 
00:59:13.908 [2024-06-11 03:55:55.231723] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.231787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.231801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.231808] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.231814] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.231828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.241748] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.241821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.241836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.241843] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.241850] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.241864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.251767] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.251826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.251841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.251848] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.251854] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.251874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 
00:59:13.908 [2024-06-11 03:55:55.261832] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.261893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.261907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.261913] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.261920] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.261934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.271878] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.271949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.271966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.271973] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.271979] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.271993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.281865] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.281939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.281954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.281960] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.281967] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.281981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 
00:59:13.908 [2024-06-11 03:55:55.291932] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.291992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.292007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.292017] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.292023] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.908 [2024-06-11 03:55:55.292038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.908 qpair failed and we were unable to recover it. 00:59:13.908 [2024-06-11 03:55:55.301909] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:13.908 [2024-06-11 03:55:55.302019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:13.908 [2024-06-11 03:55:55.302039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:13.908 [2024-06-11 03:55:55.302046] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:13.908 [2024-06-11 03:55:55.302052] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:13.909 [2024-06-11 03:55:55.302066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:13.909 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.312017] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.312129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.312144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.312150] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.312157] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.312171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 
00:59:14.168 [2024-06-11 03:55:55.321989] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.322090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.322106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.322112] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.322118] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.322133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.332044] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.332105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.332120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.332126] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.332132] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.332147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.342039] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.342109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.342123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.342130] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.342136] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.342155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 
00:59:14.168 [2024-06-11 03:55:55.352067] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.352128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.352143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.352150] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.352155] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.352170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.362125] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.362190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.362204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.362211] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.362217] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.362232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.372121] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.372181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.372195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.372201] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.372207] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.372222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 
00:59:14.168 [2024-06-11 03:55:55.382206] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.382268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.382283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.382289] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.382295] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.382310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.392177] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.392247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.392262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.392268] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.392274] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.392288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.402217] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.402278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.402292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.402299] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.402305] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.402319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 
00:59:14.168 [2024-06-11 03:55:55.412254] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.412337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.412352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.168 [2024-06-11 03:55:55.412359] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.168 [2024-06-11 03:55:55.412365] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.168 [2024-06-11 03:55:55.412379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.168 qpair failed and we were unable to recover it. 00:59:14.168 [2024-06-11 03:55:55.422275] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.168 [2024-06-11 03:55:55.422338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.168 [2024-06-11 03:55:55.422352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.422359] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.422366] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.422380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.432286] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.432349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.432364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.432370] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.432380] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.432394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 
00:59:14.169 [2024-06-11 03:55:55.442333] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.442393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.442407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.442414] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.442420] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.442435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.452411] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.452490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.452505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.452512] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.452517] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.452532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.462438] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.462512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.462527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.462534] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.462540] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.462554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 
00:59:14.169 [2024-06-11 03:55:55.472472] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.472583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.472598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.472605] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.472611] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.472626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.482454] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.482508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.482522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.482529] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.482534] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.482549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.492487] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.492542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.492556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.492563] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.492570] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.492584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 
00:59:14.169 [2024-06-11 03:55:55.502563] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.502624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.502639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.502645] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.502651] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.502665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.512532] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.512589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.512603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.512609] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.512615] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.512630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.522567] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.522628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.522642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.522651] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.522658] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.522671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 
00:59:14.169 [2024-06-11 03:55:55.532606] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.532661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.532675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.532682] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.532688] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.532702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.542636] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.542699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.542714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.542721] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.542727] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.542741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.169 qpair failed and we were unable to recover it. 00:59:14.169 [2024-06-11 03:55:55.552707] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.169 [2024-06-11 03:55:55.552776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.169 [2024-06-11 03:55:55.552791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.169 [2024-06-11 03:55:55.552797] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.169 [2024-06-11 03:55:55.552803] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.169 [2024-06-11 03:55:55.552819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.170 qpair failed and we were unable to recover it. 
00:59:14.170 [2024-06-11 03:55:55.562707] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.170 [2024-06-11 03:55:55.562766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.170 [2024-06-11 03:55:55.562780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.170 [2024-06-11 03:55:55.562787] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.170 [2024-06-11 03:55:55.562793] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.170 [2024-06-11 03:55:55.562807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.170 qpair failed and we were unable to recover it. 00:59:14.428 [2024-06-11 03:55:55.572773] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.572839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.572853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.572860] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.572865] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.572879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.582703] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.582775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.582790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.582797] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.582803] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.582817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 
00:59:14.429 [2024-06-11 03:55:55.592788] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.592866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.592882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.592888] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.592894] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.592908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.602820] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.602890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.602910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.602917] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.602923] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.602938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.612858] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.612921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.612935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.612945] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.612951] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.612965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 
00:59:14.429 [2024-06-11 03:55:55.622878] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.622944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.622959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.622965] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.622971] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.622985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.632927] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.632992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.633006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.633019] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.633025] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.633040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.642839] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.642902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.642917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.642924] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.642930] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.642944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 
00:59:14.429 [2024-06-11 03:55:55.653005] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.653109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.653124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.653131] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.653137] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.653152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.662920] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.663008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.663028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.663034] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.663040] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.663055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.672969] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.673036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.673051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.673057] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.673063] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.673077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 
00:59:14.429 [2024-06-11 03:55:55.682975] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.683036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.683051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.683057] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.683063] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.683078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.693076] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.693136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.429 [2024-06-11 03:55:55.693150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.429 [2024-06-11 03:55:55.693156] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.429 [2024-06-11 03:55:55.693162] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.429 [2024-06-11 03:55:55.693177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.429 qpair failed and we were unable to recover it. 00:59:14.429 [2024-06-11 03:55:55.703094] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.429 [2024-06-11 03:55:55.703156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.703173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.703180] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.703186] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.703201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 
00:59:14.430 [2024-06-11 03:55:55.713120] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.713180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.713194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.713200] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.713206] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.713220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.723197] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.723278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.723293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.723299] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.723305] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.723320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.733147] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.733246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.733261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.733268] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.733275] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.733290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 
00:59:14.430 [2024-06-11 03:55:55.743154] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.743223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.743238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.743245] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.743253] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.743271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.753302] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.753369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.753384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.753391] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.753398] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.753413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.763215] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.763275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.763290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.763297] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.763304] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.763319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 
00:59:14.430 [2024-06-11 03:55:55.773320] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.773377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.773391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.773397] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.773403] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.773418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.783388] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.783497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.783511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.783517] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.783524] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.783539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.793293] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.793386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.793405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.793412] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.793418] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.793433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 
00:59:14.430 [2024-06-11 03:55:55.803365] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.803448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.803464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.803471] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.803477] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.803492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.813356] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.813441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.813457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.813463] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.813470] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.813485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 00:59:14.430 [2024-06-11 03:55:55.823387] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.430 [2024-06-11 03:55:55.823451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.430 [2024-06-11 03:55:55.823466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.430 [2024-06-11 03:55:55.823473] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.430 [2024-06-11 03:55:55.823479] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.430 [2024-06-11 03:55:55.823493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.430 qpair failed and we were unable to recover it. 
00:59:14.689 [2024-06-11 03:55:55.833412] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.833479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.833493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.833500] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.833509] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.833523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.843564] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.843675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.843691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.843697] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.843703] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.843717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.853506] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.853576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.853592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.853598] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.853604] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.853619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 
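
Only the timestamps change between repetitions, so a section like this reads more easily as a count plus a time span. A minimal summarizing sketch, assuming every record carries a bracketed wall-clock timestamp of the form [2024-06-11 03:55:55.592788] as above (the script name and output format are illustrative):

    # summarize_qpair_failures.py - minimal sketch: collapse repeated SPDK
    # "qpair failed" bursts into one line. Assumes bracketed timestamps of
    # the form [2024-06-11 03:55:55.592788], as in this log.
    import re
    import sys
    from datetime import datetime

    TS = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")
    MSG = "qpair failed and we were unable to recover it."

    def summarize(text: str) -> str:
        failures = text.count(MSG)
        stamps = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S.%f")
                  for t in TS.findall(text)]
        if not stamps:
            return f"{failures} qpair failure(s), no timestamps found"
        span = (max(stamps) - min(stamps)).total_seconds()
        return f"{failures} qpair failure(s) over {span:.3f} s"

    print(summarize(sys.stdin.read()))

Fed this section on stdin (python3 summarize_qpair_failures.py < section.log), it would report on the order of seventy failures inside well under a second.
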
00:59:14.690 [2024-06-11 03:55:55.863564] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.863626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.863640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.863647] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.863653] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.863667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.873531] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.873591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.873605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.873611] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.873617] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.873631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.883599] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.883672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.883686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.883693] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.883699] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.883717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 
00:59:14.690 [2024-06-11 03:55:55.893669] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.893765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.893781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.893788] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.893794] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.893808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.903614] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.903676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.903690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.903697] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.903703] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.903717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.913671] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.913733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.913747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.913753] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.913759] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.913773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 
00:59:14.690 [2024-06-11 03:55:55.923769] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.923838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.923856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.923865] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.923871] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.923885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.933749] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.933807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.933822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.933828] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.933834] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.933848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.943802] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.943878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.943893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.943900] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.943906] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.943920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 
00:59:14.690 [2024-06-11 03:55:55.953854] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.953915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.690 [2024-06-11 03:55:55.953929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.690 [2024-06-11 03:55:55.953935] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.690 [2024-06-11 03:55:55.953941] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.690 [2024-06-11 03:55:55.953956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.690 qpair failed and we were unable to recover it. 00:59:14.690 [2024-06-11 03:55:55.963810] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.690 [2024-06-11 03:55:55.963868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:55.963883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:55.963889] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:55.963895] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:55.963909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:55.973816] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:55.973881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:55.973896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:55.973902] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:55.973909] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:55.973923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 
00:59:14.691 [2024-06-11 03:55:55.983989] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:55.984057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:55.984072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:55.984079] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:55.984085] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:55.984100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:55.993856] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:55.993918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:55.993933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:55.993940] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:55.993947] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:55.993960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:56.003904] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.003966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.003980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.003987] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.003993] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.004007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 
00:59:14.691 [2024-06-11 03:55:56.014049] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.014108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.014123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.014135] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.014141] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.014155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:56.023991] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.024058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.024074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.024080] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.024087] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.024101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:56.034046] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.034108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.034121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.034128] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.034134] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.034148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 
00:59:14.691 [2024-06-11 03:55:56.044007] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.044075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.044088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.044095] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.044101] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.044114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:56.054048] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.054109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.054124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.054130] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.054136] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.054150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:56.064177] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.064239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.064254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.064260] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.064266] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.064281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 
00:59:14.691 [2024-06-11 03:55:56.074196] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.074301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.074316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.074323] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.074329] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.074344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.691 [2024-06-11 03:55:56.084191] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.691 [2024-06-11 03:55:56.084254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.691 [2024-06-11 03:55:56.084268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.691 [2024-06-11 03:55:56.084275] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.691 [2024-06-11 03:55:56.084282] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.691 [2024-06-11 03:55:56.084296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.691 qpair failed and we were unable to recover it. 00:59:14.950 [2024-06-11 03:55:56.094224] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.950 [2024-06-11 03:55:56.094291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.950 [2024-06-11 03:55:56.094305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.950 [2024-06-11 03:55:56.094312] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.950 [2024-06-11 03:55:56.094318] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.950 [2024-06-11 03:55:56.094332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.950 qpair failed and we were unable to recover it. 
00:59:14.950 [2024-06-11 03:55:56.104244] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.950 [2024-06-11 03:55:56.104308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.950 [2024-06-11 03:55:56.104325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.950 [2024-06-11 03:55:56.104332] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.950 [2024-06-11 03:55:56.104338] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.950 [2024-06-11 03:55:56.104351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.950 qpair failed and we were unable to recover it. 00:59:14.950 [2024-06-11 03:55:56.114308] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.950 [2024-06-11 03:55:56.114417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.950 [2024-06-11 03:55:56.114431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.950 [2024-06-11 03:55:56.114438] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.950 [2024-06-11 03:55:56.114444] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.950 [2024-06-11 03:55:56.114459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.950 qpair failed and we were unable to recover it. 00:59:14.950 [2024-06-11 03:55:56.124321] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.950 [2024-06-11 03:55:56.124383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.950 [2024-06-11 03:55:56.124398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.950 [2024-06-11 03:55:56.124405] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.950 [2024-06-11 03:55:56.124411] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.950 [2024-06-11 03:55:56.124426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.950 qpair failed and we were unable to recover it. 
00:59:14.950 [2024-06-11 03:55:56.134371] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.134441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.134459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.134465] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.134472] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.134486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.144354] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.144459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.144474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.144481] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.144487] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.144505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.154429] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.154496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.154511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.154518] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.154524] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.154538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 
00:59:14.951 [2024-06-11 03:55:56.164439] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.164501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.164515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.164521] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.164528] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.164541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.174490] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.174561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.174591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.174597] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.174603] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.174618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.184510] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.184571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.184585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.184592] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.184598] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.184611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 
00:59:14.951 [2024-06-11 03:55:56.194520] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.194597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.194615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.194622] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.194627] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.194641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.204471] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.204527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.204541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.204548] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.204554] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.204567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.214555] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.214615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.214629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.214635] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.214641] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.214655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 
00:59:14.951 [2024-06-11 03:55:56.224599] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.224659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.224674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.224680] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.224686] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.224700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.234619] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.234675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.234689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.234696] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.234704] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.234719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.244628] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.244686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.244701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.244707] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.244713] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.244727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 
00:59:14.951 [2024-06-11 03:55:56.254675] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.254758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.254774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.254781] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.951 [2024-06-11 03:55:56.254787] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.951 [2024-06-11 03:55:56.254801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.951 qpair failed and we were unable to recover it. 00:59:14.951 [2024-06-11 03:55:56.264679] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.951 [2024-06-11 03:55:56.264775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.951 [2024-06-11 03:55:56.264790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.951 [2024-06-11 03:55:56.264797] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.264803] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.264817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 00:59:14.952 [2024-06-11 03:55:56.274767] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.274828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.274842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.274849] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.274855] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.274869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 
00:59:14.952 [2024-06-11 03:55:56.284767] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.284828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.284842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.284849] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.284855] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.284869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 00:59:14.952 [2024-06-11 03:55:56.294842] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.294898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.294913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.294919] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.294925] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.294939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 00:59:14.952 [2024-06-11 03:55:56.304873] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.304934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.304948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.304954] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.304960] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.304974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 
00:59:14.952 [2024-06-11 03:55:56.314856] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.314916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.314930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.314936] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.314942] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.314956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 00:59:14.952 [2024-06-11 03:55:56.324886] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.324949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.324963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.324971] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.324980] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.324995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 00:59:14.952 [2024-06-11 03:55:56.334938] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.334996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.335014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.335021] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.335027] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.335042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 
00:59:14.952 [2024-06-11 03:55:56.345006] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:14.952 [2024-06-11 03:55:56.345107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:14.952 [2024-06-11 03:55:56.345123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:14.952 [2024-06-11 03:55:56.345129] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:14.952 [2024-06-11 03:55:56.345135] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:14.952 [2024-06-11 03:55:56.345150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:14.952 qpair failed and we were unable to recover it. 00:59:15.211 [2024-06-11 03:55:56.354980] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.211 [2024-06-11 03:55:56.355051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.211 [2024-06-11 03:55:56.355066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.211 [2024-06-11 03:55:56.355073] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.211 [2024-06-11 03:55:56.355079] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.211 [2024-06-11 03:55:56.355093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.211 qpair failed and we were unable to recover it. 00:59:15.211 [2024-06-11 03:55:56.365029] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.211 [2024-06-11 03:55:56.365094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.211 [2024-06-11 03:55:56.365108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.211 [2024-06-11 03:55:56.365114] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.211 [2024-06-11 03:55:56.365120] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.211 [2024-06-11 03:55:56.365135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.211 qpair failed and we were unable to recover it. 
00:59:15.211 [2024-06-11 03:55:56.375039] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.211 [2024-06-11 03:55:56.375099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.211 [2024-06-11 03:55:56.375113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.211 [2024-06-11 03:55:56.375120] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.211 [2024-06-11 03:55:56.375126] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.211 [2024-06-11 03:55:56.375141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.211 qpair failed and we were unable to recover it. 00:59:15.211 [2024-06-11 03:55:56.385078] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.211 [2024-06-11 03:55:56.385142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.211 [2024-06-11 03:55:56.385156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.211 [2024-06-11 03:55:56.385163] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.211 [2024-06-11 03:55:56.385169] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.211 [2024-06-11 03:55:56.385183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.211 qpair failed and we were unable to recover it. 00:59:15.211 [2024-06-11 03:55:56.395061] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.211 [2024-06-11 03:55:56.395126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.211 [2024-06-11 03:55:56.395140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.211 [2024-06-11 03:55:56.395147] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.211 [2024-06-11 03:55:56.395153] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.211 [2024-06-11 03:55:56.395167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.211 qpair failed and we were unable to recover it. 
00:59:15.211 [2024-06-11 03:55:56.405181] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.211 [2024-06-11 03:55:56.405280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.211 [2024-06-11 03:55:56.405295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.211 [2024-06-11 03:55:56.405302] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.211 [2024-06-11 03:55:56.405308] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.211 [2024-06-11 03:55:56.405322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.211 qpair failed and we were unable to recover it. 00:59:15.211 [2024-06-11 03:55:56.415150] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.211 [2024-06-11 03:55:56.415204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.211 [2024-06-11 03:55:56.415219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.211 [2024-06-11 03:55:56.415229] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.211 [2024-06-11 03:55:56.415235] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.415249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.425209] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.425278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.425292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.425299] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.425305] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.425319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 
00:59:15.212 [2024-06-11 03:55:56.435230] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.435294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.435308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.435315] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.435321] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.435335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.445236] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.445344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.445361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.445368] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.445374] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.445389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.455314] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.455375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.455389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.455396] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.455402] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.455416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 
00:59:15.212 [2024-06-11 03:55:56.465357] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.465418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.465433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.465439] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.465445] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.465460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.475321] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.475380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.475395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.475401] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.475407] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.475421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.485396] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.485453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.485467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.485473] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.485479] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.485493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 
00:59:15.212 [2024-06-11 03:55:56.495390] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.495449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.495463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.495470] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.495476] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.495490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.505423] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.505485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.505503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.505510] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.505516] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.505530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.515441] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.515497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.515512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.515519] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.515525] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.515539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 
00:59:15.212 [2024-06-11 03:55:56.525466] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.525552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.525568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.525574] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.525581] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.525595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.535531] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.535591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.535605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.535612] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.535618] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.535632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 00:59:15.212 [2024-06-11 03:55:56.545578] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.212 [2024-06-11 03:55:56.545639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.212 [2024-06-11 03:55:56.545654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.212 [2024-06-11 03:55:56.545660] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.212 [2024-06-11 03:55:56.545666] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.212 [2024-06-11 03:55:56.545683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.212 qpair failed and we were unable to recover it. 
00:59:15.212 [2024-06-11 03:55:56.555522] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.213 [2024-06-11 03:55:56.555615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.213 [2024-06-11 03:55:56.555631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.213 [2024-06-11 03:55:56.555637] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.213 [2024-06-11 03:55:56.555644] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.213 [2024-06-11 03:55:56.555658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.213 qpair failed and we were unable to recover it. 00:59:15.213 [2024-06-11 03:55:56.565545] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.213 [2024-06-11 03:55:56.565600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.213 [2024-06-11 03:55:56.565614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.213 [2024-06-11 03:55:56.565621] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.213 [2024-06-11 03:55:56.565627] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.213 [2024-06-11 03:55:56.565641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.213 qpair failed and we were unable to recover it. 00:59:15.213 [2024-06-11 03:55:56.575617] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.213 [2024-06-11 03:55:56.575722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.213 [2024-06-11 03:55:56.575737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.213 [2024-06-11 03:55:56.575744] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.213 [2024-06-11 03:55:56.575750] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.213 [2024-06-11 03:55:56.575764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.213 qpair failed and we were unable to recover it. 
00:59:15.213 [2024-06-11 03:55:56.585663] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.213 [2024-06-11 03:55:56.585757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.213 [2024-06-11 03:55:56.585772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.213 [2024-06-11 03:55:56.585779] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.213 [2024-06-11 03:55:56.585785] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.213 [2024-06-11 03:55:56.585799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.213 qpair failed and we were unable to recover it. 00:59:15.213 [2024-06-11 03:55:56.595689] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.213 [2024-06-11 03:55:56.595760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.213 [2024-06-11 03:55:56.595778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.213 [2024-06-11 03:55:56.595784] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.213 [2024-06-11 03:55:56.595790] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.213 [2024-06-11 03:55:56.595804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.213 qpair failed and we were unable to recover it. 00:59:15.213 [2024-06-11 03:55:56.605727] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.213 [2024-06-11 03:55:56.605784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.213 [2024-06-11 03:55:56.605798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.213 [2024-06-11 03:55:56.605805] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.213 [2024-06-11 03:55:56.605811] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.213 [2024-06-11 03:55:56.605826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.213 qpair failed and we were unable to recover it. 
00:59:15.472 [2024-06-11 03:55:56.615784] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.615892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.615908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.615914] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.615921] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.615935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.625774] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.625840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.625855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.625862] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.625868] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.625882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.635844] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.635951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.635967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.635974] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.635983] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.635997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 
00:59:15.472 [2024-06-11 03:55:56.645839] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.645913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.645929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.645935] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.645941] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.645955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.655782] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.655889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.655904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.655911] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.655917] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.655932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.665875] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.665936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.665950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.665957] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.665963] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.665978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 
00:59:15.472 [2024-06-11 03:55:56.675908] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.675969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.675984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.675990] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.675996] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.676015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.685924] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.685988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.686003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.686013] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.686019] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.686033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.695967] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.696051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.696066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.696073] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.696079] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.696094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 
00:59:15.472 [2024-06-11 03:55:56.705983] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.706049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.706064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.706070] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.706076] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.706091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.716030] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.716090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.716104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.716111] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.716117] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.716131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.472 [2024-06-11 03:55:56.726037] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.726099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.726114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.726120] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.726129] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.726144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 
00:59:15.472 [2024-06-11 03:55:56.736104] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.472 [2024-06-11 03:55:56.736209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.472 [2024-06-11 03:55:56.736224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.472 [2024-06-11 03:55:56.736231] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.472 [2024-06-11 03:55:56.736237] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.472 [2024-06-11 03:55:56.736251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.472 qpair failed and we were unable to recover it. 00:59:15.473 [2024-06-11 03:55:56.746022] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.473 [2024-06-11 03:55:56.746081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.473 [2024-06-11 03:55:56.746097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.473 [2024-06-11 03:55:56.746103] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.473 [2024-06-11 03:55:56.746109] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.473 [2024-06-11 03:55:56.746125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.473 qpair failed and we were unable to recover it. 00:59:15.473 [2024-06-11 03:55:56.756107] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:15.473 [2024-06-11 03:55:56.756168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:15.473 [2024-06-11 03:55:56.756182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:15.473 [2024-06-11 03:55:56.756189] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:15.473 [2024-06-11 03:55:56.756195] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:15.473 [2024-06-11 03:55:56.756209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:15.473 qpair failed and we were unable to recover it. 
[... the identical seven-line CONNECT failure sequence repeated for 63 further qpair connect attempts at roughly 10 ms intervals, timestamps 2024-06-11 03:55:56.766 through 03:55:57.387 (elapsed 00:59:15.473-00:59:15.994); every attempt failed with sct 1, sc 130 and ended with "qpair failed and we were unable to recover it." ...]
00:59:16.253 [2024-06-11 03:55:57.398033] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.398117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.398135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.398142] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.398148] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.398163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 00:59:16.253 [2024-06-11 03:55:57.408043] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.408122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.408137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.408144] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.408149] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.408164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 00:59:16.253 [2024-06-11 03:55:57.418020] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.418087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.418101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.418108] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.418114] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.418128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 
00:59:16.253 [2024-06-11 03:55:57.428080] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.428192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.428207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.428214] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.428219] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.428234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 00:59:16.253 [2024-06-11 03:55:57.438120] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.438230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.438246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.438252] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.438258] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.438278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 00:59:16.253 [2024-06-11 03:55:57.448111] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.448174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.448189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.448195] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.448201] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.448216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 
00:59:16.253 [2024-06-11 03:55:57.458136] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.458212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.458228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.458234] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.458240] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.458255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 00:59:16.253 [2024-06-11 03:55:57.468228] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.468294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.468308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.468315] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.253 [2024-06-11 03:55:57.468321] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.253 [2024-06-11 03:55:57.468336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.253 qpair failed and we were unable to recover it. 00:59:16.253 [2024-06-11 03:55:57.478125] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.253 [2024-06-11 03:55:57.478190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.253 [2024-06-11 03:55:57.478205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.253 [2024-06-11 03:55:57.478211] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.478217] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.478231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 
00:59:16.254 [2024-06-11 03:55:57.488149] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.488209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.488226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.488233] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.488239] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.488252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.498270] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.498334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.498348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.498355] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.498361] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.498376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.508294] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.508363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.508378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.508385] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.508392] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.508406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 
00:59:16.254 [2024-06-11 03:55:57.518345] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.518452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.518467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.518474] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.518480] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.518494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.528382] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.528444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.528457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.528464] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.528473] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.528488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.538427] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.538488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.538503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.538509] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.538515] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.538529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 
00:59:16.254 [2024-06-11 03:55:57.548341] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.548426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.548441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.548448] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.548454] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.548468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.558412] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.558472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.558487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.558493] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.558499] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.558513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.568465] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.568559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.568574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.568581] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.568587] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.568602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 
00:59:16.254 [2024-06-11 03:55:57.578528] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.578643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.578659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.578666] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.578672] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.578686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.588504] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.588566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.588580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.588587] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.588593] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.588607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.598522] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.598587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.598601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.598607] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.598613] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.598628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 
00:59:16.254 [2024-06-11 03:55:57.608576] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.254 [2024-06-11 03:55:57.608643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.254 [2024-06-11 03:55:57.608657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.254 [2024-06-11 03:55:57.608664] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.254 [2024-06-11 03:55:57.608670] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.254 [2024-06-11 03:55:57.608684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.254 qpair failed and we were unable to recover it. 00:59:16.254 [2024-06-11 03:55:57.618633] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.255 [2024-06-11 03:55:57.618695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.255 [2024-06-11 03:55:57.618710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.255 [2024-06-11 03:55:57.618719] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.255 [2024-06-11 03:55:57.618725] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.255 [2024-06-11 03:55:57.618740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.255 qpair failed and we were unable to recover it. 00:59:16.255 [2024-06-11 03:55:57.628672] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.255 [2024-06-11 03:55:57.628774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.255 [2024-06-11 03:55:57.628789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.255 [2024-06-11 03:55:57.628796] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.255 [2024-06-11 03:55:57.628802] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.255 [2024-06-11 03:55:57.628815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.255 qpair failed and we were unable to recover it. 
00:59:16.255 [2024-06-11 03:55:57.638633] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.255 [2024-06-11 03:55:57.638697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.255 [2024-06-11 03:55:57.638712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.255 [2024-06-11 03:55:57.638718] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.255 [2024-06-11 03:55:57.638724] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.255 [2024-06-11 03:55:57.638739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.255 qpair failed and we were unable to recover it. 00:59:16.255 [2024-06-11 03:55:57.648752] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.255 [2024-06-11 03:55:57.648809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.255 [2024-06-11 03:55:57.648823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.255 [2024-06-11 03:55:57.648830] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.255 [2024-06-11 03:55:57.648836] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.255 [2024-06-11 03:55:57.648850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.255 qpair failed and we were unable to recover it. 00:59:16.513 [2024-06-11 03:55:57.658717] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.658824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.658839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.658846] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.658852] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.658866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 
00:59:16.513 [2024-06-11 03:55:57.668744] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.668859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.668875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.668882] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.668888] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.668902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 00:59:16.513 [2024-06-11 03:55:57.678751] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.678815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.678830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.678836] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.678842] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.678857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 00:59:16.513 [2024-06-11 03:55:57.688832] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.688941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.688956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.688963] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.688969] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.688984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 
00:59:16.513 [2024-06-11 03:55:57.698818] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.698876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.698891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.698897] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.698903] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.698917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 00:59:16.513 [2024-06-11 03:55:57.708840] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.708899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.708913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.708922] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.708928] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.708942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 00:59:16.513 [2024-06-11 03:55:57.718906] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.718968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.718982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.718988] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.718994] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.719012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 
00:59:16.513 [2024-06-11 03:55:57.728917] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.729002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.729021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.729027] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.729034] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.729048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 00:59:16.513 [2024-06-11 03:55:57.738918] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.513 [2024-06-11 03:55:57.739017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.513 [2024-06-11 03:55:57.739033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.513 [2024-06-11 03:55:57.739040] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.513 [2024-06-11 03:55:57.739046] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.513 [2024-06-11 03:55:57.739060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.513 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.748974] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.749040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.749055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.749062] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.749068] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.749082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 
00:59:16.514 [2024-06-11 03:55:57.758979] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.759038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.759052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.759059] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.759065] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.759078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.768946] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.769008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.769026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.769033] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.769039] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.769053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.779055] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.779115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.779130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.779137] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.779143] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.779158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 
00:59:16.514 [2024-06-11 03:55:57.789095] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.789161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.789175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.789182] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.789188] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.789202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.799090] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.799154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.799172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.799179] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.799185] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.799201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.809153] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.809259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.809274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.809281] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.809287] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.809301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 
00:59:16.514 [2024-06-11 03:55:57.819201] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.819261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.819275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.819282] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.819288] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.819303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.829223] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.829295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.829312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.829319] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.829325] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.829340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.839262] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.839333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.839347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.839353] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.839360] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.839377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 
00:59:16.514 [2024-06-11 03:55:57.849275] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.849350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.849365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.849371] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.849377] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.849392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.859224] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.859282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.859296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.859303] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.859309] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.859323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 00:59:16.514 [2024-06-11 03:55:57.869329] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.869435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.869451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.514 [2024-06-11 03:55:57.869457] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.514 [2024-06-11 03:55:57.869463] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.514 [2024-06-11 03:55:57.869477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.514 qpair failed and we were unable to recover it. 
00:59:16.514 [2024-06-11 03:55:57.879319] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.514 [2024-06-11 03:55:57.879381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.514 [2024-06-11 03:55:57.879396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.515 [2024-06-11 03:55:57.879402] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.515 [2024-06-11 03:55:57.879408] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.515 [2024-06-11 03:55:57.879422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.515 qpair failed and we were unable to recover it. 00:59:16.515 [2024-06-11 03:55:57.889350] ctrlr.c: 759:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:59:16.515 [2024-06-11 03:55:57.889409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:59:16.515 [2024-06-11 03:55:57.889427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:59:16.515 [2024-06-11 03:55:57.889434] nvme_tcp.c:2430:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:59:16.515 [2024-06-11 03:55:57.889440] nvme_tcp.c:2220:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f01b8000b90 00:59:16.515 [2024-06-11 03:55:57.889455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:59:16.515 qpair failed and we were unable to recover it. 00:59:16.515 [2024-06-11 03:55:57.889591] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:59:16.515 A controller has encountered a failure and is being reset. 00:59:16.771 Controller properly reset. 00:59:16.771 Initializing NVMe Controllers 00:59:16.771 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:59:16.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:59:16.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:59:16.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:59:16.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:59:16.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:59:16.772 Initialization complete. Launching workers. 
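[Editor's note: the run of CONNECT failures above is the expected behavior of target_disconnect_tc2. The host keeps retrying I/O queue creation against a controller ID the target no longer knows, and each attempt comes back with sct 1 / sc 130, which decodes to the Fabrics CONNECT "Invalid Parameters" status (0x82); the CQ transport error -6 the poller then reports is simply errno ENXIO ("No such device or address") surfaced from the dead TCP qpair. A minimal sketch of driving the same CONNECT path by hand with the kernel initiator and nvme-cli follows; the address, port, and subsystem NQN are copied from the log, everything else (module availability, keep-alive value) is an assumption and not part of this CI job.]

  #!/usr/bin/env bash
  # Sketch: hand-driven NVMe-oF TCP CONNECT against the target exercised
  # above. Assumes nvme-cli and the nvme-tcp kernel module are available.
  traddr=10.0.0.2                      # from the log
  trsvcid=4420                         # from the log
  subnqn=nqn.2016-06.io.spdk:cnode1    # from the log

  modprobe nvme-tcp

  # Discovery should list the subsystem when the target is healthy.
  nvme discover -t tcp -a "$traddr" -s "$trsvcid"

  # The fabric CONNECT itself; while the target is mid-disconnect this is
  # expected to fail just as the SPDK host library reports above.
  nvme connect -t tcp -a "$traddr" -s "$trsvcid" -n "$subnqn" \
      --keep-alive-tmo=5 || echo "connect failed (target down?)"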
00:59:16.772 Starting thread on core 1 00:59:16.772 Starting thread on core 2 00:59:16.772 Starting thread on core 3 00:59:16.772 Starting thread on core 0 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:59:16.772 00:59:16.772 real 0m11.314s 00:59:16.772 user 0m21.489s 00:59:16.772 sys 0m4.321s 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:59:16.772 ************************************ 00:59:16.772 END TEST nvmf_target_disconnect_tc2 00:59:16.772 ************************************ 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:59:16.772 rmmod nvme_tcp 00:59:16.772 rmmod nvme_fabrics 00:59:16.772 rmmod nvme_keyring 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2413482 ']' 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2413482 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@949 -- # '[' -z 2413482 ']' 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # kill -0 2413482 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # uname 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:59:16.772 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2413482 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_4 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_4 = sudo ']' 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2413482' 00:59:17.029 killing process with pid 2413482 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # kill 2413482 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # wait 2413482 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:59:17.029 
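[Editor's note: the nvmftestfini/nvmfcleanup trace above reduces to a short teardown recipe. The sketch below is a hand-rolled equivalent, not the real helpers (those live in test/nvmf/common.sh and common/autotest_common.sh); the PID is the app instance from this run. The rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines in the log are what modprobe -v prints for each module it removes.]

  # Sketch of the teardown sequence traced above (assumptions noted).
  sync                                  # host/target_disconnect.sh@51
  modprobe -v -r nvme-tcp               # prints the rmmod lines seen above
  modprobe -v -r nvme-fabrics           # then the fabrics core, if still loaded
  app_pid=2413482                       # the nvmf_tgt reactor from this run
  if ps --no-headers -o comm= "$app_pid" >/dev/null; then
      echo "killing process with pid $app_pid"
      kill "$app_pid"
      wait "$app_pid" 2>/dev/null || true   # killprocess kills, then waits
  fi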
03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:59:17.029 03:55:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:19.560 03:56:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:59:19.560 00:59:19.560 real 0m20.371s 00:59:19.560 user 0m49.013s 00:59:19.560 sys 0m9.501s 00:59:19.560 03:56:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:59:19.560 03:56:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:59:19.560 ************************************ 00:59:19.560 END TEST nvmf_target_disconnect 00:59:19.560 ************************************ 00:59:19.560 03:56:00 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:59:19.560 03:56:00 nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:19.560 03:56:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:19.560 03:56:00 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:59:19.560 00:59:19.560 real 29m0.657s 00:59:19.560 user 74m3.212s 00:59:19.560 sys 7m54.233s 00:59:19.560 03:56:00 nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:59:19.560 03:56:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:19.560 ************************************ 00:59:19.560 END TEST nvmf_tcp 00:59:19.560 ************************************ 00:59:19.560 03:56:00 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:59:19.560 03:56:00 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:59:19.560 03:56:00 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:59:19.560 03:56:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:59:19.560 03:56:00 -- common/autotest_common.sh@10 -- # set +x 00:59:19.560 ************************************ 00:59:19.560 START TEST spdkcli_nvmf_tcp 00:59:19.560 ************************************ 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:59:19.560 * Looking for test storage... 
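[editor's note] The spdkcli test below feeds a long quoted command list to spdkcli_job.py; the same configuration tree can be built one command at a time with scripts/spdkcli.py, which the match step later invokes as `ll /nvmf`. A minimal hand-run sketch, assuming the one-shot argument form shown for `ll` also applies to create commands (command syntax copied from the job invocation below):

  scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
  scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  scripts/spdkcli.py ll /nvmf          # inspect the resulting tree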
00:59:19.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:19.560 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2415012 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2415012 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # '[' -z 2415012 ']' 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:19.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:19.561 [2024-06-11 03:56:00.716306] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:59:19.561 [2024-06-11 03:56:00.716352] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415012 ] 00:59:19.561 EAL: No free 2048 kB hugepages reported on node 1 00:59:19.561 [2024-06-11 03:56:00.774829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:59:19.561 [2024-06-11 03:56:00.817358] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:59:19.561 [2024-06-11 03:56:00.817362] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@863 -- # return 0 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:19.561 03:56:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:59:19.561 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:59:19.561 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:59:19.561 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:59:19.561 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:59:19.561 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:59:19.561 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:59:19.561 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:59:19.561 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:59:19.561 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:59:19.561 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:59:19.561 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:59:19.561 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:59:19.561 ' 00:59:22.095 [2024-06-11 03:56:03.323063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:23.467 [2024-06-11 03:56:04.498978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:59:25.401 [2024-06-11 03:56:06.661553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:59:27.303 [2024-06-11 03:56:08.519339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:59:28.677 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:59:28.677 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:59:28.677 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:59:28.677 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:59:28.677 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:59:28.677 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:59:28.677 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:59:28.677 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:59:28.677 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:59:28.677 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:59:28.677 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:59:28.677 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:59:28.677 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:59:28.677 03:56:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:59:28.677 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:28.677 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:28.935 03:56:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:59:28.935 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:59:28.935 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:28.935 03:56:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:59:28.935 03:56:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:29.194 03:56:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:59:29.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:59:29.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:59:29.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:59:29.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:59:29.194 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:59:29.194 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:59:29.194 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:59:29.194 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:59:29.194 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:59:29.194 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:59:29.194 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:59:29.194 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:59:29.194 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:59:29.194 ' 00:59:34.459 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:59:34.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:59:34.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:59:34.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:59:34.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:59:34.460 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:59:34.460 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:59:34.460 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:59:34.460 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:59:34.460 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:59:34.460 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:59:34.460 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:59:34.460 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:59:34.460 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2415012 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2415012 ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2415012 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # uname 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2415012 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2415012' 00:59:34.460 killing process with pid 2415012 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # kill 2415012 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # wait 2415012 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2415012 ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2415012 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@949 -- # '[' -z 2415012 ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # kill -0 2415012 00:59:34.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2415012) - No such process 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # echo 'Process with pid 2415012 is not found' 00:59:34.460 Process with pid 2415012 is not found 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:59:34.460 00:59:34.460 real 0m15.156s 00:59:34.460 user 0m31.377s 00:59:34.460 sys 0m0.660s 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:59:34.460 03:56:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:59:34.460 ************************************ 00:59:34.460 END TEST spdkcli_nvmf_tcp 00:59:34.460 ************************************ 00:59:34.460 03:56:15 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:59:34.460 03:56:15 -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:59:34.460 03:56:15 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:59:34.460 03:56:15 -- common/autotest_common.sh@10 -- # set +x 00:59:34.460 ************************************ 00:59:34.460 START TEST nvmf_identify_passthru 00:59:34.460 ************************************ 00:59:34.460 03:56:15 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:59:34.720 * Looking for test storage... 00:59:34.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:59:34.720 03:56:15 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:34.720 03:56:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:34.720 03:56:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:34.720 03:56:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:59:34.720 03:56:15 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:34.720 03:56:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:34.720 03:56:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:34.720 03:56:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:59:34.720 03:56:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:34.720 03:56:15 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:34.720 03:56:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:59:34.720 03:56:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:59:34.720 03:56:15 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:59:34.720 03:56:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
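[editor's note] gather_supported_nvmf_pci_devs below builds whitelists of NIC device IDs (e810/x722 under Intel 0x8086, several Mellanox 0x15b3 parts) and matches them against the PCI bus. A hand-run equivalent with pciutils — hypothetical in that the harness reads a pre-built pci_bus_cache rather than calling lspci:

  lspci -Dnd 8086:159b    # Intel E810; the ID this run matches at 0000:86:00.x below
  lspci -Dnd 8086:1592    # other E810 variant
  lspci -Dnd 8086:37d2    # x722
  lspci -Dnd 15b3:        # any Mellanox function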
00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:59:41.282 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:59:41.283 Found 0000:86:00.0 (0x8086 - 0x159b) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:59:41.283 Found 0000:86:00.1 (0x8086 - 0x159b) 00:59:41.283 03:56:21 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:59:41.283 Found net devices under 0000:86:00.0: cvl_0_0 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:59:41.283 Found net devices under 0000:86:00.1: cvl_0_1 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:59:41.283 03:56:21 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:59:41.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:59:41.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:59:41.283 00:59:41.283 --- 10.0.0.2 ping statistics --- 00:59:41.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:41.283 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:59:41.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:59:41.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:59:41.283 00:59:41.283 --- 10.0.0.1 ping statistics --- 00:59:41.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:59:41.283 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:59:41.283 03:56:21 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:59:41.283 03:56:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:41.283 03:56:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=() 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # local bdfs 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=($(get_nvme_bdfs)) 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # get_nvme_bdfs 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=() 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # local bdfs 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:59:41.283 03:56:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:59:41.283 03:56:22 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:59:41.283 03:56:22 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5f:00.0 00:59:41.283 03:56:22 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # echo 0000:5f:00.0 00:59:41.283 03:56:22 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5f:00.0 00:59:41.283 03:56:22 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5f:00.0 ']' 00:59:41.283 03:56:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:59:41.283 03:56:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:59:41.283 03:56:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5f:00.0' -i 0 00:59:41.283 EAL: No free 2048 kB hugepages reported on node 1 00:59:45.464 
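[editor's note] The namespace plumbing a few lines back splits one NIC pair into a target side (cvl_0_0, moved into cvl_0_0_ns_spdk as 10.0.0.2) and an initiator side (cvl_0_1, left in the root namespace as 10.0.0.1), with the two pings confirming reachability each way. A condensed sketch of that setup, taken from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target check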
03:56:26 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN025500DK1P6AGN 00:59:45.464 03:56:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5f:00.0' -i 0 00:59:45.464 03:56:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:59:45.464 03:56:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:59:45.464 EAL: No free 2048 kB hugepages reported on node 1 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@723 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2422324 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2422324 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # '[' -z 2422324 ']' 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local max_retries=100 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:59:50.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:50.731 [2024-06-11 03:56:31.358303] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 00:59:50.731 [2024-06-11 03:56:31.358345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:59:50.731 EAL: No free 2048 kB hugepages reported on node 1 00:59:50.731 [2024-06-11 03:56:31.418917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:59:50.731 [2024-06-11 03:56:31.460103] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:59:50.731 [2024-06-11 03:56:31.460139] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
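[editor's note] get_first_nvme_bdf above picks the first controller address out of gen_nvme.sh's JSON with jq, and spdk_nvme_identify plus grep/awk captures the serial and model numbers; note that awk '{print $3}' keeps only the first token of the model line, which is why the trace records just "INTEL". A hand-run sketch of the same extraction, with paths relative to the spdk repo root as in the log:

  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
             | grep 'Serial Number:' | awk '{print $3}')
  model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
            | grep 'Model Number:' | awk '{print $3}')
  echo "$bdf $serial $model"    # later compared against the same fields read over the fabric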
00:59:50.731 [2024-06-11 03:56:31.460146] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:59:50.731 [2024-06-11 03:56:31.460151] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:59:50.731 [2024-06-11 03:56:31.460156] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:59:50.731 [2024-06-11 03:56:31.460204] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:59:50.731 [2024-06-11 03:56:31.460301] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:59:50.731 [2024-06-11 03:56:31.460393] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:59:50.731 [2024-06-11 03:56:31.460394] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@863 -- # return 0 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:50.731 INFO: Log level set to 20 00:59:50.731 INFO: Requests: 00:59:50.731 { 00:59:50.731 "jsonrpc": "2.0", 00:59:50.731 "method": "nvmf_set_config", 00:59:50.731 "id": 1, 00:59:50.731 "params": { 00:59:50.731 "admin_cmd_passthru": { 00:59:50.731 "identify_ctrlr": true 00:59:50.731 } 00:59:50.731 } 00:59:50.731 } 00:59:50.731 00:59:50.731 INFO: response: 00:59:50.731 { 00:59:50.731 "jsonrpc": "2.0", 00:59:50.731 "id": 1, 00:59:50.731 "result": true 00:59:50.731 } 00:59:50.731 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:50.731 INFO: Setting log level to 20 00:59:50.731 INFO: Setting log level to 20 00:59:50.731 INFO: Log level set to 20 00:59:50.731 INFO: Log level set to 20 00:59:50.731 INFO: Requests: 00:59:50.731 { 00:59:50.731 "jsonrpc": "2.0", 00:59:50.731 "method": "framework_start_init", 00:59:50.731 "id": 1 00:59:50.731 } 00:59:50.731 00:59:50.731 INFO: Requests: 00:59:50.731 { 00:59:50.731 "jsonrpc": "2.0", 00:59:50.731 "method": "framework_start_init", 00:59:50.731 "id": 1 00:59:50.731 } 00:59:50.731 00:59:50.731 [2024-06-11 03:56:31.564909] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:59:50.731 INFO: response: 00:59:50.731 { 00:59:50.731 "jsonrpc": "2.0", 00:59:50.731 "id": 1, 00:59:50.731 "result": true 00:59:50.731 } 00:59:50.731 00:59:50.731 INFO: response: 00:59:50.731 { 00:59:50.731 "jsonrpc": "2.0", 00:59:50.731 "id": 1, 00:59:50.731 "result": true 00:59:50.731 } 00:59:50.731 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:59:50.731 INFO: Setting log level to 40 00:59:50.731 INFO: Setting log level to 40 00:59:50.731 INFO: Setting log level to 40 00:59:50.731 [2024-06-11 03:56:31.574353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@729 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:50.731 03:56:31 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5f:00.0 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:50.731 03:56:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:53.260 Nvme0n1 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:53.260 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:53.260 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:53.260 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:53.260 [2024-06-11 03:56:34.468149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:53.260 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:53.260 [ 00:59:53.260 { 00:59:53.260 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:59:53.260 "subtype": "Discovery", 00:59:53.260 "listen_addresses": [], 00:59:53.260 "allow_any_host": true, 00:59:53.260 "hosts": [] 00:59:53.260 }, 00:59:53.260 { 00:59:53.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:59:53.260 "subtype": "NVMe", 00:59:53.260 "listen_addresses": [ 00:59:53.260 { 00:59:53.260 "trtype": "TCP", 00:59:53.260 "adrfam": "IPv4", 00:59:53.260 "traddr": "10.0.0.2", 00:59:53.260 "trsvcid": "4420" 00:59:53.260 } 00:59:53.260 ], 00:59:53.260 "allow_any_host": true, 00:59:53.260 "hosts": [], 00:59:53.260 "serial_number": 
"SPDK00000000000001", 00:59:53.260 "model_number": "SPDK bdev Controller", 00:59:53.260 "max_namespaces": 1, 00:59:53.260 "min_cntlid": 1, 00:59:53.260 "max_cntlid": 65519, 00:59:53.260 "namespaces": [ 00:59:53.260 { 00:59:53.260 "nsid": 1, 00:59:53.260 "bdev_name": "Nvme0n1", 00:59:53.260 "name": "Nvme0n1", 00:59:53.260 "nguid": "A9EA92887DEF4AA3A67198184F7E6B26", 00:59:53.260 "uuid": "a9ea9288-7def-4aa3-a671-98184f7e6b26" 00:59:53.260 } 00:59:53.260 ] 00:59:53.260 } 00:59:53.260 ] 00:59:53.260 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:53.260 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:59:53.260 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:59:53.260 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:59:53.260 EAL: No free 2048 kB hugepages reported on node 1 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN025500DK1P6AGN 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:59:53.519 EAL: No free 2048 kB hugepages reported on node 1 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN025500DK1P6AGN '!=' PHLN025500DK1P6AGN ']' 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:59:53.519 03:56:34 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:59:53.519 rmmod nvme_tcp 00:59:53.519 rmmod nvme_fabrics 00:59:53.519 rmmod nvme_keyring 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:59:53.519 03:56:34 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2422324 ']' 00:59:53.519 03:56:34 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2422324 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@949 -- # '[' -z 2422324 ']' 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # kill -0 2422324 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # uname 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:59:53.519 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2422324 00:59:53.778 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:59:53.778 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:59:53.778 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2422324' 00:59:53.778 killing process with pid 2422324 00:59:53.778 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # kill 2422324 00:59:53.778 03:56:34 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # wait 2422324 00:59:55.678 03:56:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:59:55.678 03:56:36 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:59:55.678 03:56:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:59:55.678 03:56:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:59:55.678 03:56:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:59:55.678 03:56:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:55.678 03:56:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:59:55.678 03:56:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:58.209 03:56:39 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:59:58.209 00:59:58.209 real 0m23.242s 00:59:58.209 user 0m30.352s 00:59:58.209 sys 0m5.313s 00:59:58.209 03:56:39 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # xtrace_disable 00:59:58.209 03:56:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:59:58.209 ************************************ 00:59:58.209 END TEST nvmf_identify_passthru 00:59:58.209 ************************************ 00:59:58.210 03:56:39 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:59:58.210 03:56:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:59:58.210 03:56:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:59:58.210 03:56:39 -- common/autotest_common.sh@10 -- # set +x 00:59:58.210 ************************************ 00:59:58.210 START TEST nvmf_dif 00:59:58.210 ************************************ 00:59:58.210 03:56:39 nvmf_dif -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:59:58.210 * Looking for test storage... 
00:59:58.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:59:58.210 03:56:39 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:59:58.210 03:56:39 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:59:58.210 03:56:39 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:59:58.210 03:56:39 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:59:58.210 03:56:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:58.210 03:56:39 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:58.210 03:56:39 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:58.210 03:56:39 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:59:58.210 03:56:39 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:59:58.210 03:56:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:59:58.210 03:56:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:59:58.210 03:56:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:59:58.210 03:56:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:59:58.210 03:56:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:59:58.210 03:56:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:59:58.210 03:56:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:59:58.210 03:56:39 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:59:58.210 03:56:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 01:00:04.833 Found 0000:86:00.0 (0x8086 - 0x159b) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 01:00:04.833 Found 0000:86:00.1 (0x8086 - 0x159b) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
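The device scan above is driven purely by PCI vendor/device IDs: e810 parts (0x1592, 0x159b), x722 (0x37d2) and a list of Mellanox IDs are collected into arrays, the e810 list wins on this bench (NET_TYPE=phy with ice-bound NICs), and each matching function is about to be mapped to its bound netdev through sysfs, as the next entries show. A sketch of that PCI-to-netdev lookup, assuming lspci is available and that the cvl_0_* names come from this bench's udev rules:

# List the net interfaces behind every Intel e810 (8086:159b) function,
# the same /sys/bus/pci/devices/<bdf>/net/ mapping nvmf/common.sh walks.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
    done
done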
01:00:04.833 03:56:45 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 01:00:04.834 Found net devices under 0000:86:00.0: cvl_0_0 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 01:00:04.834 Found net devices under 0000:86:00.1: cvl_0_1 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:00:04.834 03:56:45 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 01:00:04.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:00:04.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 01:00:04.834 01:00:04.834 --- 10.0.0.2 ping statistics --- 01:00:04.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:04.834 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:00:04.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 01:00:04.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 01:00:04.834 01:00:04.834 --- 10.0.0.1 ping statistics --- 01:00:04.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:00:04.834 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@422 -- # return 0 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:00:04.834 03:56:45 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:00:07.365 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 01:00:07.365 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 01:00:07.365 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 01:00:07.365 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 01:00:07.365 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 01:00:07.366 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:00:07.366 03:56:48 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 01:00:07.366 03:56:48 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@723 -- # xtrace_disable 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2428382 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2428382 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@830 -- # '[' -z 2428382 ']' 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@835 -- # local max_retries=100 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:00:07.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@839 -- # xtrace_disable 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:07.366 [2024-06-11 03:56:48.536175] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 01:00:07.366 [2024-06-11 03:56:48.536226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:00:07.366 EAL: No free 2048 kB hugepages reported on node 1 01:00:07.366 [2024-06-11 03:56:48.597030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:00:07.366 [2024-06-11 03:56:48.638807] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:00:07.366 [2024-06-11 03:56:48.638843] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:00:07.366 [2024-06-11 03:56:48.638850] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:00:07.366 [2024-06-11 03:56:48.638857] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 01:00:07.366 [2024-06-11 03:56:48.638862] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
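The four app_setup_trace notices just above repeat for every target this log starts; they are informational pointers to the tracing workflow, not warnings. Because the target was launched with -e 0xFFFF, every tracepoint group is enabled and a shared-memory trace ring exists, named after the app and the shm id. A minimal sketch of acting on those notices for this run (app name nvmf, shm id 0); no ip netns exec is needed because /dev/shm is not isolated by a network namespace:

# Decode a live snapshot of the trace ring the notices point at
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
# Or, as the last notice suggests, keep the raw ring for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0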
01:00:07.366 [2024-06-11 03:56:48.638883] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@859 -- # (( i == 0 )) 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@863 -- # return 0 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@729 -- # xtrace_disable 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:07.366 03:56:48 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:00:07.366 03:56:48 nvmf_dif -- target/dif.sh@139 -- # create_transport 01:00:07.366 03:56:48 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:07.366 03:56:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:07.366 [2024-06-11 03:56:48.768220] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:00:07.624 03:56:48 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:07.624 03:56:48 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 01:00:07.624 03:56:48 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:00:07.624 03:56:48 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 01:00:07.624 03:56:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:07.624 ************************************ 01:00:07.624 START TEST fio_dif_1_default 01:00:07.624 ************************************ 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # fio_dif_1 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:00:07.624 bdev_null0 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:00:07.624 [2024-06-11 03:56:48.840505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:07.624 { 01:00:07.624 "params": { 01:00:07.624 "name": "Nvme$subsystem", 01:00:07.624 "trtype": "$TEST_TRANSPORT", 01:00:07.624 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:07.624 "adrfam": "ipv4", 01:00:07.624 "trsvcid": "$NVMF_PORT", 01:00:07.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:07.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:07.624 "hdgst": ${hdgst:-false}, 01:00:07.624 "ddgst": ${ddgst:-false} 01:00:07.624 }, 01:00:07.624 "method": "bdev_nvme_attach_controller" 01:00:07.624 } 01:00:07.624 EOF 01:00:07.624 )") 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local sanitizers 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # shift 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local asan_lib= 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libasan 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:00:07.624 "params": { 01:00:07.624 "name": "Nvme0", 01:00:07.624 "trtype": "tcp", 01:00:07.624 "traddr": "10.0.0.2", 01:00:07.624 "adrfam": "ipv4", 01:00:07.624 "trsvcid": "4420", 01:00:07.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:07.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:07.624 "hdgst": false, 01:00:07.624 "ddgst": false 01:00:07.624 }, 01:00:07.624 "method": "bdev_nvme_attach_controller" 01:00:07.624 }' 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:00:07.624 03:56:48 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:07.881 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:00:07.881 fio-3.35 01:00:07.881 Starting 1 thread 01:00:07.881 EAL: No free 2048 kB hugepages reported on node 1 01:00:20.074 01:00:20.074 filename0: (groupid=0, jobs=1): err= 0: pid=2428572: Tue Jun 11 03:56:59 2024 01:00:20.074 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10011msec) 01:00:20.074 slat (nsec): min=5982, max=43734, avg=6493.42, stdev=1941.56 01:00:20.074 clat (usec): min=40800, max=44078, avg=41345.47, stdev=504.92 01:00:20.074 lat (usec): min=40806, max=44116, avg=41351.96, stdev=505.07 01:00:20.074 clat percentiles (usec): 01:00:20.074 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 01:00:20.074 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:00:20.074 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 01:00:20.074 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 01:00:20.074 | 99.99th=[44303] 01:00:20.074 bw ( KiB/s): min= 383, max= 416, per=99.54%, avg=385.55, stdev= 7.17, samples=20 01:00:20.074 iops : min= 95, max= 104, 
avg=96.35, stdev= 1.81, samples=20 01:00:20.074 lat (msec) : 50=100.00% 01:00:20.074 cpu : usr=95.07%, sys=4.68%, ctx=20, majf=0, minf=235 01:00:20.074 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:20.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:20.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:20.074 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:20.074 latency : target=0, window=0, percentile=100.00%, depth=4 01:00:20.074 01:00:20.074 Run status group 0 (all jobs): 01:00:20.074 READ: bw=387KiB/s (396kB/s), 387KiB/s-387KiB/s (396kB/s-396kB/s), io=3872KiB (3965kB), run=10011-10011msec 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.074 01:00:20.074 real 0m11.118s 01:00:20.074 user 0m15.834s 01:00:20.074 sys 0m0.766s 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # xtrace_disable 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 01:00:20.074 ************************************ 01:00:20.074 END TEST fio_dif_1_default 01:00:20.074 ************************************ 01:00:20.074 03:56:59 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 01:00:20.074 03:56:59 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:00:20.074 03:56:59 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 01:00:20.074 03:56:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:20.074 ************************************ 01:00:20.074 START TEST fio_dif_1_multi_subsystems 01:00:20.074 ************************************ 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # fio_dif_1_multi_subsystems 01:00:20.074 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:56:59 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 bdev_null0 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 [2024-06-11 03:57:00.026436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 bdev_null1 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:20.075 { 01:00:20.075 "params": { 01:00:20.075 "name": "Nvme$subsystem", 01:00:20.075 "trtype": "$TEST_TRANSPORT", 01:00:20.075 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:20.075 "adrfam": "ipv4", 01:00:20.075 "trsvcid": "$NVMF_PORT", 01:00:20.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:20.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:20.075 "hdgst": ${hdgst:-false}, 01:00:20.075 "ddgst": ${ddgst:-false} 01:00:20.075 }, 01:00:20.075 "method": "bdev_nvme_attach_controller" 01:00:20.075 } 01:00:20.075 EOF 01:00:20.075 )") 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local sanitizers 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # shift 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local asan_lib= 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libasan 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:20.075 { 01:00:20.075 "params": { 01:00:20.075 "name": "Nvme$subsystem", 01:00:20.075 "trtype": "$TEST_TRANSPORT", 01:00:20.075 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:20.075 "adrfam": "ipv4", 01:00:20.075 "trsvcid": "$NVMF_PORT", 01:00:20.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:20.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:20.075 "hdgst": ${hdgst:-false}, 01:00:20.075 "ddgst": ${ddgst:-false} 01:00:20.075 }, 01:00:20.075 "method": "bdev_nvme_attach_controller" 01:00:20.075 } 01:00:20.075 EOF 01:00:20.075 )") 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
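gen_nvmf_target_json, traced above, is what turns the two subsystems into a bdev configuration fio can consume without an RPC socket: one heredoc fragment per subsystem id is appended to config(), the fragments are joined with IFS=',', and the result goes through the jq step and reaches fio_bdev as --spdk_json_conf over a /dev/fd descriptor. A sketch of that pattern under the same assumptions (two subsystems on 10.0.0.2:4420; any outer wrapper common.sh adds before the jq step is not visible in the trace and is elided here):

config=()
for sub in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
(IFS=,; printf '%s\n' "${config[*]}")   # the comma-joined list printed next in the log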
01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:00:20.075 "params": { 01:00:20.075 "name": "Nvme0", 01:00:20.075 "trtype": "tcp", 01:00:20.075 "traddr": "10.0.0.2", 01:00:20.075 "adrfam": "ipv4", 01:00:20.075 "trsvcid": "4420", 01:00:20.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:20.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:20.075 "hdgst": false, 01:00:20.075 "ddgst": false 01:00:20.075 }, 01:00:20.075 "method": "bdev_nvme_attach_controller" 01:00:20.075 },{ 01:00:20.075 "params": { 01:00:20.075 "name": "Nvme1", 01:00:20.075 "trtype": "tcp", 01:00:20.075 "traddr": "10.0.0.2", 01:00:20.075 "adrfam": "ipv4", 01:00:20.075 "trsvcid": "4420", 01:00:20.075 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:20.075 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:20.075 "hdgst": false, 01:00:20.075 "ddgst": false 01:00:20.075 }, 01:00:20.075 "method": "bdev_nvme_attach_controller" 01:00:20.075 }' 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 01:00:20.075 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:20.076 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:20.076 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:20.076 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:00:20.076 03:57:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:20.076 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:00:20.076 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 01:00:20.076 fio-3.35 01:00:20.076 Starting 2 threads 01:00:20.076 EAL: No free 2048 kB hugepages reported on node 1 01:00:30.049 01:00:30.049 filename0: (groupid=0, jobs=1): err= 0: pid=2430556: Tue Jun 11 03:57:10 2024 01:00:30.049 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10026msec) 01:00:30.049 slat (nsec): min=5898, max=38420, avg=8152.39, stdev=3489.43 01:00:30.049 clat (usec): min=40785, max=42131, avg=41062.21, stdev=290.82 01:00:30.049 lat (usec): min=40792, max=42142, avg=41070.37, stdev=291.18 01:00:30.049 clat percentiles (usec): 01:00:30.049 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 01:00:30.049 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:00:30.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 01:00:30.049 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 01:00:30.049 | 99.99th=[42206] 
01:00:30.049 bw ( KiB/s): min= 384, max= 416, per=49.82%, avg=388.80, stdev=11.72, samples=20 01:00:30.049 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 01:00:30.049 lat (msec) : 50=100.00% 01:00:30.049 cpu : usr=97.99%, sys=1.73%, ctx=13, majf=0, minf=185 01:00:30.049 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:30.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:30.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:30.049 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:30.049 latency : target=0, window=0, percentile=100.00%, depth=4 01:00:30.049 filename1: (groupid=0, jobs=1): err= 0: pid=2430557: Tue Jun 11 03:57:10 2024 01:00:30.049 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10024msec) 01:00:30.049 slat (nsec): min=5969, max=31857, avg=8280.84, stdev=3279.54 01:00:30.049 clat (usec): min=40823, max=42053, avg=41054.05, stdev=262.70 01:00:30.049 lat (usec): min=40830, max=42081, avg=41062.33, stdev=263.18 01:00:30.049 clat percentiles (usec): 01:00:30.049 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 01:00:30.049 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 01:00:30.049 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 01:00:30.049 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 01:00:30.049 | 99.99th=[42206] 01:00:30.049 bw ( KiB/s): min= 384, max= 416, per=49.82%, avg=388.80, stdev=11.72, samples=20 01:00:30.049 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 01:00:30.049 lat (msec) : 50=100.00% 01:00:30.049 cpu : usr=97.76%, sys=1.96%, ctx=20, majf=0, minf=35 01:00:30.049 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:30.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:30.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:30.049 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:30.049 latency : target=0, window=0, percentile=100.00%, depth=4 01:00:30.049 01:00:30.049 Run status group 0 (all jobs): 01:00:30.049 READ: bw=779KiB/s (797kB/s), 389KiB/s-389KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10024-10026msec 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.049 03:57:11 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.049 01:00:30.049 real 0m11.176s 01:00:30.049 user 0m26.093s 01:00:30.049 sys 0m0.657s 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 ************************************ 01:00:30.049 END TEST fio_dif_1_multi_subsystems 01:00:30.049 ************************************ 01:00:30.049 03:57:11 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 01:00:30.049 03:57:11 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:00:30.049 03:57:11 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 ************************************ 01:00:30.049 START TEST fio_dif_rand_params 01:00:30.049 ************************************ 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # fio_dif_rand_params 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 bdev_null0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.049 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:30.050 [2024-06-11 03:57:11.270266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:30.050 { 01:00:30.050 "params": { 01:00:30.050 "name": "Nvme$subsystem", 01:00:30.050 "trtype": "$TEST_TRANSPORT", 01:00:30.050 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:30.050 "adrfam": "ipv4", 01:00:30.050 "trsvcid": "$NVMF_PORT", 01:00:30.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:30.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:30.050 "hdgst": 
${hdgst:-false}, 01:00:30.050 "ddgst": ${ddgst:-false} 01:00:30.050 }, 01:00:30.050 "method": "bdev_nvme_attach_controller" 01:00:30.050 } 01:00:30.050 EOF 01:00:30.050 )") 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
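At this point the trace has both inputs in hand: the JSON attach config on /dev/fd/62 and the generated fio job file on /dev/fd/61, and @1351 launches fio with the SPDK bdev ioengine preloaded. Condensed into one standalone command, using the plugin path and function names exactly as they appear in the trace (the process substitutions are a stand-in for the two /dev/fd descriptors the harness wires up):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0) \
        <(gen_fio_conf)

The @1343-@1345 loop just before this only exists to prepend libasan or libclang_rt.asan to LD_PRELOAD when the fio plugin was built with a sanitizer; both ldd greps come back empty here, so asan_lib stays unset and only the plugin itself is preloaded.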
01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:00:30.050 "params": { 01:00:30.050 "name": "Nvme0", 01:00:30.050 "trtype": "tcp", 01:00:30.050 "traddr": "10.0.0.2", 01:00:30.050 "adrfam": "ipv4", 01:00:30.050 "trsvcid": "4420", 01:00:30.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:30.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:30.050 "hdgst": false, 01:00:30.050 "ddgst": false 01:00:30.050 }, 01:00:30.050 "method": "bdev_nvme_attach_controller" 01:00:30.050 }' 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:00:30.050 03:57:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:30.309 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:00:30.309 ... 
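A quick sanity check that applies to every fio report in this log: average bandwidth is average IOPS times the block size. For the 4 KiB jobs above, 97.2 iops x 4 KiB = 388.8 KiB/s, matching "bw ... avg=388.80"; for the 128 KiB randread jobs reported below, 289.6 iops x 128 KiB = 37068.8 KiB/s ≈ 36.2 MiB/s, matching "avg=37068.80".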
01:00:30.309 fio-3.35 01:00:30.309 Starting 3 threads 01:00:30.309 EAL: No free 2048 kB hugepages reported on node 1 01:00:36.882 01:00:36.882 filename0: (groupid=0, jobs=1): err= 0: pid=2432972: Tue Jun 11 03:57:17 2024 01:00:36.882 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(181MiB/5006msec) 01:00:36.882 slat (nsec): min=6091, max=33883, avg=9397.68, stdev=2737.89 01:00:36.882 clat (usec): min=3590, max=87995, avg=10339.79, stdev=11872.20 01:00:36.882 lat (usec): min=3598, max=88007, avg=10349.19, stdev=11872.46 01:00:36.882 clat percentiles (usec): 01:00:36.882 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4752], 01:00:36.882 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6849], 60.00th=[ 7832], 01:00:36.882 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[11076], 95.00th=[47973], 01:00:36.882 | 99.00th=[50594], 99.50th=[51643], 99.90th=[87557], 99.95th=[87557], 01:00:36.882 | 99.99th=[87557] 01:00:36.882 bw ( KiB/s): min=25344, max=48384, per=36.33%, avg=37068.80, stdev=8252.05, samples=10 01:00:36.882 iops : min= 198, max= 378, avg=289.60, stdev=64.47, samples=10 01:00:36.882 lat (msec) : 4=1.17%, 10=84.90%, 20=5.86%, 50=6.55%, 100=1.52% 01:00:36.882 cpu : usr=96.12%, sys=3.54%, ctx=11, majf=0, minf=43 01:00:36.882 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:36.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:36.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:36.882 issued rwts: total=1450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:36.882 latency : target=0, window=0, percentile=100.00%, depth=3 01:00:36.882 filename0: (groupid=0, jobs=1): err= 0: pid=2432973: Tue Jun 11 03:57:17 2024 01:00:36.882 read: IOPS=252, BW=31.6MiB/s (33.1MB/s)(158MiB/5003msec) 01:00:36.882 slat (nsec): min=6115, max=29369, avg=10174.29, stdev=3312.62 01:00:36.882 clat (usec): min=3654, max=92065, avg=11850.83, stdev=13577.87 01:00:36.882 lat (usec): min=3660, max=92078, avg=11861.00, stdev=13578.01 01:00:36.882 clat percentiles (usec): 01:00:36.882 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 5604], 01:00:36.882 | 30.00th=[ 6325], 40.00th=[ 6915], 50.00th=[ 7701], 60.00th=[ 8717], 01:00:36.882 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[44827], 95.00th=[48497], 01:00:36.882 | 99.00th=[51119], 99.50th=[86508], 99.90th=[91751], 99.95th=[91751], 01:00:36.882 | 99.99th=[91751] 01:00:36.882 bw ( KiB/s): min=26624, max=44288, per=32.92%, avg=33592.89, stdev=6045.58, samples=9 01:00:36.882 iops : min= 208, max= 346, avg=262.44, stdev=47.23, samples=9 01:00:36.882 lat (msec) : 4=0.55%, 10=77.87%, 20=11.46%, 50=7.51%, 100=2.61% 01:00:36.882 cpu : usr=93.36%, sys=4.70%, ctx=410, majf=0, minf=119 01:00:36.882 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:36.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:36.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:36.882 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:36.882 latency : target=0, window=0, percentile=100.00%, depth=3 01:00:36.882 filename0: (groupid=0, jobs=1): err= 0: pid=2432974: Tue Jun 11 03:57:17 2024 01:00:36.882 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(163MiB/5043msec) 01:00:36.882 slat (nsec): min=6119, max=66018, avg=9561.09, stdev=3225.03 01:00:36.882 clat (usec): min=3616, max=88318, avg=11577.35, stdev=13027.80 01:00:36.882 lat (usec): min=3623, max=88329, avg=11586.91, stdev=13028.00 01:00:36.882 clat 
percentiles (usec): 01:00:36.882 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4883], 01:00:36.882 | 30.00th=[ 5932], 40.00th=[ 6652], 50.00th=[ 7308], 60.00th=[ 8356], 01:00:36.882 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[46400], 95.00th=[48497], 01:00:36.882 | 99.00th=[50594], 99.50th=[51643], 99.90th=[54789], 99.95th=[88605], 01:00:36.882 | 99.99th=[88605] 01:00:36.882 bw ( KiB/s): min=20736, max=48128, per=32.67%, avg=33331.20, stdev=8677.86, samples=10 01:00:36.882 iops : min= 162, max= 376, avg=260.40, stdev=67.80, samples=10 01:00:36.882 lat (msec) : 4=1.53%, 10=75.86%, 20=12.11%, 50=8.05%, 100=2.45% 01:00:36.882 cpu : usr=96.39%, sys=3.27%, ctx=18, majf=0, minf=167 01:00:36.882 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:36.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:36.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:36.882 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:36.882 latency : target=0, window=0, percentile=100.00%, depth=3 01:00:36.882 01:00:36.882 Run status group 0 (all jobs): 01:00:36.882 READ: bw=99.6MiB/s (104MB/s), 31.6MiB/s-36.2MiB/s (33.1MB/s-38.0MB/s), io=503MiB (527MB), run=5003-5043msec 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 bdev_null0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 [2024-06-11 03:57:17.476478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 bdev_null1 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:00:36.882 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.883 bdev_null2 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:36.883 { 01:00:36.883 "params": { 01:00:36.883 "name": "Nvme$subsystem", 01:00:36.883 "trtype": "$TEST_TRANSPORT", 01:00:36.883 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:36.883 "adrfam": "ipv4", 01:00:36.883 "trsvcid": "$NVMF_PORT", 01:00:36.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:36.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:36.883 "hdgst": ${hdgst:-false}, 01:00:36.883 "ddgst": ${ddgst:-false} 01:00:36.883 }, 01:00:36.883 "method": "bdev_nvme_attach_controller" 01:00:36.883 } 01:00:36.883 EOF 01:00:36.883 )") 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:36.883 { 01:00:36.883 "params": { 01:00:36.883 "name": "Nvme$subsystem", 01:00:36.883 "trtype": "$TEST_TRANSPORT", 01:00:36.883 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:36.883 "adrfam": "ipv4", 01:00:36.883 "trsvcid": "$NVMF_PORT", 01:00:36.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:36.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:36.883 "hdgst": ${hdgst:-false}, 01:00:36.883 "ddgst": ${ddgst:-false} 01:00:36.883 }, 01:00:36.883 "method": "bdev_nvme_attach_controller" 01:00:36.883 } 01:00:36.883 EOF 01:00:36.883 )") 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:00:36.883 03:57:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:36.883 { 01:00:36.883 "params": { 01:00:36.883 "name": "Nvme$subsystem", 01:00:36.883 "trtype": "$TEST_TRANSPORT", 01:00:36.883 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:36.883 "adrfam": "ipv4", 01:00:36.883 "trsvcid": "$NVMF_PORT", 01:00:36.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:36.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:36.883 "hdgst": ${hdgst:-false}, 01:00:36.883 "ddgst": ${ddgst:-false} 01:00:36.883 }, 01:00:36.883 "method": "bdev_nvme_attach_controller" 01:00:36.883 } 01:00:36.883 EOF 01:00:36.883 )") 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:00:36.883 03:57:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:00:36.883 "params": { 01:00:36.883 "name": "Nvme0", 01:00:36.883 "trtype": "tcp", 01:00:36.883 "traddr": "10.0.0.2", 01:00:36.883 "adrfam": "ipv4", 01:00:36.883 "trsvcid": "4420", 01:00:36.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:36.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:36.883 "hdgst": false, 01:00:36.883 "ddgst": false 01:00:36.883 }, 01:00:36.883 "method": "bdev_nvme_attach_controller" 01:00:36.883 },{ 01:00:36.883 "params": { 01:00:36.883 "name": "Nvme1", 01:00:36.883 "trtype": "tcp", 01:00:36.883 "traddr": "10.0.0.2", 01:00:36.883 "adrfam": "ipv4", 01:00:36.883 "trsvcid": "4420", 01:00:36.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:36.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:36.884 "hdgst": false, 01:00:36.884 "ddgst": false 01:00:36.884 }, 01:00:36.884 "method": "bdev_nvme_attach_controller" 01:00:36.884 },{ 01:00:36.884 "params": { 01:00:36.884 "name": "Nvme2", 01:00:36.884 "trtype": "tcp", 01:00:36.884 "traddr": "10.0.0.2", 01:00:36.884 "adrfam": "ipv4", 01:00:36.884 "trsvcid": "4420", 01:00:36.884 "subnqn": "nqn.2016-06.io.spdk:cnode2", 01:00:36.884 "hostnqn": "nqn.2016-06.io.spdk:host2", 01:00:36.884 "hdgst": false, 01:00:36.884 "ddgst": false 01:00:36.884 }, 01:00:36.884 "method": "bdev_nvme_attach_controller" 01:00:36.884 }' 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:36.884 
03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:00:36.884 03:57:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:36.884 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:00:36.884 ... 01:00:36.884 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:00:36.884 ... 01:00:36.884 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 01:00:36.884 ... 01:00:36.884 fio-3.35 01:00:36.884 Starting 24 threads 01:00:36.884 EAL: No free 2048 kB hugepages reported on node 1 01:00:49.133 01:00:49.133 filename0: (groupid=0, jobs=1): err= 0: pid=2434232: Tue Jun 11 03:57:29 2024 01:00:49.133 read: IOPS=537, BW=2151KiB/s (2203kB/s)(21.1MiB/10029msec) 01:00:49.133 slat (nsec): min=3460, max=45742, avg=15768.43, stdev=4635.95 01:00:49.133 clat (usec): min=6038, max=45662, avg=29613.01, stdev=2736.96 01:00:49.133 lat (usec): min=6056, max=45677, avg=29628.78, stdev=2737.51 01:00:49.133 clat percentiles (usec): 01:00:49.133 | 1.00th=[14877], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.133 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.133 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.133 | 99.00th=[31065], 99.50th=[31589], 99.90th=[39060], 99.95th=[39060], 01:00:49.133 | 99.99th=[45876] 01:00:49.133 bw ( KiB/s): min= 2048, max= 2464, per=4.23%, avg=2151.20, stdev=116.83, samples=20 01:00:49.133 iops : min= 512, max= 616, avg=537.80, stdev=29.21, samples=20 01:00:49.133 lat (msec) : 10=0.67%, 20=1.89%, 50=97.44% 01:00:49.133 cpu : usr=98.74%, sys=0.89%, ctx=19, majf=0, minf=61 01:00:49.133 IO depths : 1=5.7%, 2=11.7%, 4=24.3%, 8=51.4%, 16=6.8%, 32=0.0%, >=64=0.0% 01:00:49.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.133 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.133 issued rwts: total=5394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.133 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.133 filename0: (groupid=0, jobs=1): err= 0: pid=2434233: Tue Jun 11 03:57:29 2024 01:00:49.133 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10018msec) 01:00:49.133 slat (nsec): min=4679, max=42611, avg=10886.10, stdev=3226.51 01:00:49.133 clat (usec): min=5993, max=31846, avg=29897.17, stdev=2184.62 01:00:49.133 lat (usec): min=6013, max=31861, avg=29908.06, stdev=2184.03 01:00:49.133 clat percentiles (usec): 01:00:49.133 | 1.00th=[16909], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 01:00:49.133 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.133 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.133 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31851], 99.95th=[31851], 01:00:49.133 | 99.99th=[31851] 01:00:49.133 bw ( KiB/s): min= 2048, max= 2436, per=4.19%, avg=2131.40, stdev=96.05, samples=20 01:00:49.133 iops : min= 512, max= 609, avg=532.85, stdev=24.01, samples=20 01:00:49.133 
lat (msec) : 10=0.56%, 20=0.64%, 50=98.80% 01:00:49.133 cpu : usr=98.58%, sys=1.06%, ctx=13, majf=0, minf=62 01:00:49.133 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 01:00:49.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.133 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename0: (groupid=0, jobs=1): err= 0: pid=2434235: Tue Jun 11 03:57:29 2024 01:00:49.134 read: IOPS=527, BW=2112KiB/s (2162kB/s)(20.6MiB/10002msec) 01:00:49.134 slat (nsec): min=12141, max=94892, avg=58368.37, stdev=5739.18 01:00:49.134 clat (usec): min=15056, max=66251, avg=29791.94, stdev=2203.62 01:00:49.134 lat (usec): min=15099, max=66266, avg=29850.31, stdev=2202.25 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[28705], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 01:00:49.134 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 01:00:49.134 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 01:00:49.134 | 99.00th=[31065], 99.50th=[31327], 99.90th=[66323], 99.95th=[66323], 01:00:49.134 | 99.99th=[66323] 01:00:49.134 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2101.89, stdev=77.69, samples=19 01:00:49.134 iops : min= 480, max= 544, avg=525.47, stdev=19.42, samples=19 01:00:49.134 lat (msec) : 20=0.30%, 50=99.39%, 100=0.30% 01:00:49.134 cpu : usr=98.80%, sys=0.79%, ctx=13, majf=0, minf=44 01:00:49.134 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:00:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename0: (groupid=0, jobs=1): err= 0: pid=2434236: Tue Jun 11 03:57:29 2024 01:00:49.134 read: IOPS=528, BW=2115KiB/s (2166kB/s)(20.7MiB/10016msec) 01:00:49.134 slat (nsec): min=8423, max=58921, avg=25344.45, stdev=7907.42 01:00:49.134 clat (usec): min=23463, max=41265, avg=30019.08, stdev=824.79 01:00:49.134 lat (usec): min=23477, max=41278, avg=30044.43, stdev=824.95 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.134 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 01:00:49.134 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.134 | 99.00th=[31327], 99.50th=[31589], 99.90th=[41157], 99.95th=[41157], 01:00:49.134 | 99.99th=[41157] 01:00:49.134 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2112.00, stdev=65.66, samples=20 01:00:49.134 iops : min= 512, max= 544, avg=528.00, stdev=16.42, samples=20 01:00:49.134 lat (msec) : 50=100.00% 01:00:49.134 cpu : usr=98.16%, sys=1.47%, ctx=14, majf=0, minf=48 01:00:49.134 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:00:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename0: (groupid=0, jobs=1): err= 0: pid=2434237: Tue Jun 11 03:57:29 
2024 01:00:49.134 read: IOPS=533, BW=2133KiB/s (2185kB/s)(20.9MiB/10020msec) 01:00:49.134 slat (nsec): min=7498, max=35226, avg=16868.35, stdev=3819.49 01:00:49.134 clat (usec): min=6376, max=31806, avg=29850.81, stdev=2174.20 01:00:49.134 lat (usec): min=6384, max=31829, avg=29867.67, stdev=2174.47 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[17695], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.134 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.134 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.134 | 99.00th=[31327], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 01:00:49.134 | 99.99th=[31851] 01:00:49.134 bw ( KiB/s): min= 2048, max= 2432, per=4.19%, avg=2131.20, stdev=95.38, samples=20 01:00:49.134 iops : min= 512, max= 608, avg=532.80, stdev=23.85, samples=20 01:00:49.134 lat (msec) : 10=0.60%, 20=0.60%, 50=98.80% 01:00:49.134 cpu : usr=98.63%, sys=1.02%, ctx=18, majf=0, minf=62 01:00:49.134 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 01:00:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename0: (groupid=0, jobs=1): err= 0: pid=2434238: Tue Jun 11 03:57:29 2024 01:00:49.134 read: IOPS=528, BW=2115KiB/s (2166kB/s)(20.7MiB/10016msec) 01:00:49.134 slat (nsec): min=8541, max=51093, avg=25625.26, stdev=7695.01 01:00:49.134 clat (usec): min=20340, max=41151, avg=30039.61, stdev=1068.63 01:00:49.134 lat (usec): min=20349, max=41166, avg=30065.23, stdev=1068.61 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.134 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.134 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.134 | 99.00th=[31327], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 01:00:49.134 | 99.99th=[41157] 01:00:49.134 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2112.00, stdev=64.21, samples=20 01:00:49.134 iops : min= 512, max= 544, avg=528.00, stdev=16.05, samples=20 01:00:49.134 lat (msec) : 50=100.00% 01:00:49.134 cpu : usr=98.55%, sys=1.08%, ctx=13, majf=0, minf=60 01:00:49.134 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 01:00:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename0: (groupid=0, jobs=1): err= 0: pid=2434239: Tue Jun 11 03:57:29 2024 01:00:49.134 read: IOPS=529, BW=2117KiB/s (2167kB/s)(20.7MiB/10012msec) 01:00:49.134 slat (nsec): min=7528, max=45040, avg=18265.21, stdev=5674.38 01:00:49.134 clat (usec): min=19498, max=54863, avg=30080.91, stdev=1530.95 01:00:49.134 lat (usec): min=19506, max=54885, avg=30099.17, stdev=1531.15 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[24511], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.134 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.134 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.134 | 
99.00th=[35390], 99.50th=[40109], 99.90th=[47973], 99.95th=[47973], 01:00:49.134 | 99.99th=[54789] 01:00:49.134 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2112.80, stdev=64.94, samples=20 01:00:49.134 iops : min= 512, max= 544, avg=528.20, stdev=16.23, samples=20 01:00:49.134 lat (msec) : 20=0.19%, 50=99.77%, 100=0.04% 01:00:49.134 cpu : usr=98.61%, sys=1.02%, ctx=14, majf=0, minf=63 01:00:49.134 IO depths : 1=5.6%, 2=11.6%, 4=24.4%, 8=51.5%, 16=6.9%, 32=0.0%, >=64=0.0% 01:00:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename0: (groupid=0, jobs=1): err= 0: pid=2434240: Tue Jun 11 03:57:29 2024 01:00:49.134 read: IOPS=528, BW=2116KiB/s (2167kB/s)(20.7MiB/10012msec) 01:00:49.134 slat (nsec): min=7622, max=61997, avg=19390.54, stdev=5250.43 01:00:49.134 clat (usec): min=15655, max=51249, avg=30068.72, stdev=1291.61 01:00:49.134 lat (usec): min=15663, max=51273, avg=30088.11, stdev=1291.55 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.134 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.134 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.134 | 99.00th=[31327], 99.50th=[31589], 99.90th=[45351], 99.95th=[51119], 01:00:49.134 | 99.99th=[51119] 01:00:49.134 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2108.63, stdev=78.31, samples=19 01:00:49.134 iops : min= 480, max= 544, avg=527.16, stdev=19.58, samples=19 01:00:49.134 lat (msec) : 20=0.30%, 50=99.62%, 100=0.08% 01:00:49.134 cpu : usr=98.62%, sys=1.01%, ctx=8, majf=0, minf=55 01:00:49.134 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 01:00:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename1: (groupid=0, jobs=1): err= 0: pid=2434241: Tue Jun 11 03:57:29 2024 01:00:49.134 read: IOPS=527, BW=2110KiB/s (2160kB/s)(20.6MiB/10011msec) 01:00:49.134 slat (nsec): min=7623, max=37745, avg=10227.22, stdev=2517.15 01:00:49.134 clat (usec): min=24670, max=66710, avg=30243.50, stdev=1822.05 01:00:49.134 lat (usec): min=24679, max=66747, avg=30253.73, stdev=1822.55 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[29230], 5.00th=[29754], 10.00th=[29754], 20.00th=[30016], 01:00:49.134 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.134 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 01:00:49.134 | 99.00th=[31327], 99.50th=[31851], 99.90th=[61080], 99.95th=[66323], 01:00:49.134 | 99.99th=[66847] 01:00:49.134 bw ( KiB/s): min= 1920, max= 2176, per=4.14%, avg=2105.60, stdev=77.42, samples=20 01:00:49.134 iops : min= 480, max= 544, avg=526.40, stdev=19.35, samples=20 01:00:49.134 lat (msec) : 50=99.70%, 100=0.30% 01:00:49.134 cpu : usr=98.74%, sys=0.90%, ctx=8, majf=0, minf=71 01:00:49.134 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 01:00:49.134 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.134 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.134 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.134 filename1: (groupid=0, jobs=1): err= 0: pid=2434242: Tue Jun 11 03:57:29 2024 01:00:49.134 read: IOPS=528, BW=2115KiB/s (2166kB/s)(20.7MiB/10016msec) 01:00:49.134 slat (nsec): min=7626, max=54724, avg=25688.01, stdev=7776.72 01:00:49.134 clat (usec): min=20608, max=54246, avg=30039.73, stdev=1765.11 01:00:49.134 lat (usec): min=20622, max=54262, avg=30065.42, stdev=1765.59 01:00:49.134 clat percentiles (usec): 01:00:49.134 | 1.00th=[21890], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 01:00:49.134 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.134 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.134 | 99.00th=[38536], 99.50th=[39584], 99.90th=[46924], 99.95th=[46924], 01:00:49.134 | 99.99th=[54264] 01:00:49.134 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2112.00, stdev=65.66, samples=20 01:00:49.135 iops : min= 512, max= 544, avg=528.00, stdev=16.42, samples=20 01:00:49.135 lat (msec) : 50=99.96%, 100=0.04% 01:00:49.135 cpu : usr=98.39%, sys=1.22%, ctx=12, majf=0, minf=50 01:00:49.135 IO depths : 1=5.2%, 2=11.4%, 4=24.8%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename1: (groupid=0, jobs=1): err= 0: pid=2434243: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=533, BW=2135KiB/s (2187kB/s)(20.9MiB/10010msec) 01:00:49.135 slat (nsec): min=7493, max=56276, avg=21729.62, stdev=8191.76 01:00:49.135 clat (usec): min=10159, max=37662, avg=29799.08, stdev=1905.79 01:00:49.135 lat (usec): min=10167, max=37678, avg=29820.81, stdev=1907.09 01:00:49.135 clat percentiles (usec): 01:00:49.135 | 1.00th=[18482], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.135 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.135 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.135 | 99.00th=[31327], 99.50th=[31589], 99.90th=[36963], 99.95th=[37487], 01:00:49.135 | 99.99th=[37487] 01:00:49.135 bw ( KiB/s): min= 2048, max= 2432, per=4.19%, avg=2131.20, stdev=95.38, samples=20 01:00:49.135 iops : min= 512, max= 608, avg=532.80, stdev=23.85, samples=20 01:00:49.135 lat (msec) : 20=1.46%, 50=98.54% 01:00:49.135 cpu : usr=97.98%, sys=1.66%, ctx=35, majf=0, minf=51 01:00:49.135 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=51.0%, 16=6.7%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename1: (groupid=0, jobs=1): err= 0: pid=2434245: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10016msec) 01:00:49.135 slat (nsec): min=7487, max=51600, avg=25376.50, stdev=7880.11 01:00:49.135 clat (usec): min=17061, max=47645, avg=29996.73, stdev=1757.86 01:00:49.135 lat (usec): min=17069, max=47665, avg=30022.11, stdev=1758.68 
01:00:49.135 clat percentiles (usec): 01:00:49.135 | 1.00th=[22414], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 01:00:49.135 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.135 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.135 | 99.00th=[37487], 99.50th=[41157], 99.90th=[45351], 99.95th=[46400], 01:00:49.135 | 99.99th=[47449] 01:00:49.135 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2114.40, stdev=62.57, samples=20 01:00:49.135 iops : min= 512, max= 544, avg=528.60, stdev=15.64, samples=20 01:00:49.135 lat (msec) : 20=0.45%, 50=99.55% 01:00:49.135 cpu : usr=98.67%, sys=0.96%, ctx=15, majf=0, minf=46 01:00:49.135 IO depths : 1=5.4%, 2=11.5%, 4=24.6%, 8=51.3%, 16=7.1%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename1: (groupid=0, jobs=1): err= 0: pid=2434246: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.7MiB/10008msec) 01:00:49.135 slat (nsec): min=6339, max=52920, avg=23674.15, stdev=8047.33 01:00:49.135 clat (usec): min=13116, max=45442, avg=30003.25, stdev=1347.72 01:00:49.135 lat (usec): min=13124, max=45459, avg=30026.92, stdev=1347.88 01:00:49.135 clat percentiles (usec): 01:00:49.135 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.135 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.135 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.135 | 99.00th=[31327], 99.50th=[31589], 99.90th=[45351], 99.95th=[45351], 01:00:49.135 | 99.99th=[45351] 01:00:49.135 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2111.55, stdev=65.23, samples=20 01:00:49.135 iops : min= 512, max= 544, avg=527.85, stdev=16.27, samples=20 01:00:49.135 lat (msec) : 20=0.30%, 50=99.70% 01:00:49.135 cpu : usr=98.55%, sys=1.09%, ctx=15, majf=0, minf=46 01:00:49.135 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename1: (groupid=0, jobs=1): err= 0: pid=2434247: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10011msec) 01:00:49.135 slat (nsec): min=7940, max=54402, avg=19356.48, stdev=5154.59 01:00:49.135 clat (usec): min=15576, max=44540, avg=30070.43, stdev=1235.44 01:00:49.135 lat (usec): min=15589, max=44565, avg=30089.78, stdev=1235.50 01:00:49.135 clat percentiles (usec): 01:00:49.135 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.135 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.135 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.135 | 99.00th=[31589], 99.50th=[31851], 99.90th=[44303], 99.95th=[44303], 01:00:49.135 | 99.99th=[44303] 01:00:49.135 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2108.79, stdev=77.91, samples=19 01:00:49.135 iops : min= 480, max= 544, avg=527.16, stdev=19.58, samples=19 01:00:49.135 lat (msec) : 
20=0.30%, 50=99.70% 01:00:49.135 cpu : usr=98.73%, sys=0.90%, ctx=13, majf=0, minf=65 01:00:49.135 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename1: (groupid=0, jobs=1): err= 0: pid=2434248: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=528, BW=2115KiB/s (2166kB/s)(20.7MiB/10016msec) 01:00:49.135 slat (nsec): min=12403, max=53396, avg=26088.73, stdev=7901.53 01:00:49.135 clat (usec): min=23539, max=41015, avg=30037.29, stdev=810.74 01:00:49.135 lat (usec): min=23557, max=41037, avg=30063.38, stdev=810.45 01:00:49.135 clat percentiles (usec): 01:00:49.135 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.135 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.135 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.135 | 99.00th=[31327], 99.50th=[31851], 99.90th=[41157], 99.95th=[41157], 01:00:49.135 | 99.99th=[41157] 01:00:49.135 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2112.00, stdev=65.66, samples=20 01:00:49.135 iops : min= 512, max= 544, avg=528.00, stdev=16.42, samples=20 01:00:49.135 lat (msec) : 50=100.00% 01:00:49.135 cpu : usr=98.54%, sys=1.09%, ctx=5, majf=0, minf=48 01:00:49.135 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename1: (groupid=0, jobs=1): err= 0: pid=2434249: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=529, BW=2116KiB/s (2167kB/s)(20.7MiB/10009msec) 01:00:49.135 slat (nsec): min=6496, max=51636, avg=24726.21, stdev=7561.37 01:00:49.135 clat (usec): min=13171, max=47015, avg=30006.92, stdev=1518.37 01:00:49.135 lat (usec): min=13191, max=47032, avg=30031.65, stdev=1518.52 01:00:49.135 clat percentiles (usec): 01:00:49.135 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.135 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.135 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.135 | 99.00th=[31327], 99.50th=[31851], 99.90th=[46924], 99.95th=[46924], 01:00:49.135 | 99.99th=[46924] 01:00:49.135 bw ( KiB/s): min= 1920, max= 2192, per=4.15%, avg=2111.10, stdev=77.19, samples=20 01:00:49.135 iops : min= 480, max= 548, avg=527.75, stdev=19.28, samples=20 01:00:49.135 lat (msec) : 20=0.30%, 50=99.70% 01:00:49.135 cpu : usr=98.76%, sys=0.88%, ctx=12, majf=0, minf=65 01:00:49.135 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename2: (groupid=0, jobs=1): err= 0: pid=2434250: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=536, 
BW=2145KiB/s (2196kB/s)(21.0MiB/10005msec) 01:00:49.135 slat (nsec): min=6442, max=51670, avg=14896.42, stdev=8511.77 01:00:49.135 clat (usec): min=10921, max=78176, avg=29778.04, stdev=4132.98 01:00:49.135 lat (usec): min=10929, max=78193, avg=29792.94, stdev=4132.05 01:00:49.135 clat percentiles (usec): 01:00:49.135 | 1.00th=[21103], 5.00th=[22414], 10.00th=[25297], 20.00th=[27657], 01:00:49.135 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.135 | 70.00th=[30278], 80.00th=[30540], 90.00th=[33817], 95.00th=[36963], 01:00:49.135 | 99.00th=[39060], 99.50th=[40633], 99.90th=[66323], 99.95th=[66323], 01:00:49.135 | 99.99th=[78119] 01:00:49.135 bw ( KiB/s): min= 1907, max= 2256, per=4.21%, avg=2141.75, stdev=66.78, samples=20 01:00:49.135 iops : min= 476, max= 564, avg=535.40, stdev=16.83, samples=20 01:00:49.135 lat (msec) : 20=0.86%, 50=98.84%, 100=0.30% 01:00:49.135 cpu : usr=98.63%, sys=0.99%, ctx=14, majf=0, minf=60 01:00:49.135 IO depths : 1=0.1%, 2=0.4%, 4=3.3%, 8=80.0%, 16=16.3%, 32=0.0%, >=64=0.0% 01:00:49.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 complete : 0=0.0%, 4=89.2%, 8=8.9%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.135 issued rwts: total=5364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.135 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.135 filename2: (groupid=0, jobs=1): err= 0: pid=2434251: Tue Jun 11 03:57:29 2024 01:00:49.135 read: IOPS=527, BW=2110KiB/s (2161kB/s)(20.6MiB/10021msec) 01:00:49.135 slat (nsec): min=7525, max=36901, avg=16589.31, stdev=4263.27 01:00:49.135 clat (usec): min=15070, max=64965, avg=30140.45, stdev=1378.16 01:00:49.135 lat (usec): min=15080, max=64989, avg=30157.04, stdev=1378.36 01:00:49.136 clat percentiles (usec): 01:00:49.136 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.136 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.136 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.136 | 99.00th=[31327], 99.50th=[31851], 99.90th=[46924], 99.95th=[49021], 01:00:49.136 | 99.99th=[64750] 01:00:49.136 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2112.00, stdev=76.47, samples=20 01:00:49.136 iops : min= 480, max= 544, avg=528.00, stdev=19.12, samples=20 01:00:49.136 lat (msec) : 20=0.11%, 50=99.85%, 100=0.04% 01:00:49.136 cpu : usr=98.65%, sys=1.00%, ctx=11, majf=0, minf=68 01:00:49.136 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 01:00:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 issued rwts: total=5286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.136 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.136 filename2: (groupid=0, jobs=1): err= 0: pid=2434252: Tue Jun 11 03:57:29 2024 01:00:49.136 read: IOPS=528, BW=2114KiB/s (2164kB/s)(20.7MiB/10022msec) 01:00:49.136 slat (nsec): min=8764, max=54254, avg=23707.51, stdev=8051.32 01:00:49.136 clat (usec): min=23610, max=41104, avg=30072.75, stdev=804.09 01:00:49.136 lat (usec): min=23623, max=41135, avg=30096.45, stdev=803.58 01:00:49.136 clat percentiles (usec): 01:00:49.136 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.136 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.136 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.136 | 
99.00th=[31327], 99.50th=[31851], 99.90th=[41157], 99.95th=[41157], 01:00:49.136 | 99.99th=[41157] 01:00:49.136 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2112.00, stdev=65.66, samples=20 01:00:49.136 iops : min= 512, max= 544, avg=528.00, stdev=16.42, samples=20 01:00:49.136 lat (msec) : 50=100.00% 01:00:49.136 cpu : usr=98.73%, sys=0.91%, ctx=7, majf=0, minf=60 01:00:49.136 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 01:00:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.136 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.136 filename2: (groupid=0, jobs=1): err= 0: pid=2434253: Tue Jun 11 03:57:29 2024 01:00:49.136 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.7MiB/10005msec) 01:00:49.136 slat (nsec): min=6176, max=53113, avg=18292.72, stdev=7736.02 01:00:49.136 clat (usec): min=8178, max=52615, avg=30005.77, stdev=3314.45 01:00:49.136 lat (usec): min=8186, max=52632, avg=30024.06, stdev=3315.03 01:00:49.136 clat percentiles (usec): 01:00:49.136 | 1.00th=[18482], 5.00th=[25297], 10.00th=[29492], 20.00th=[29754], 01:00:49.136 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.136 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[31851], 01:00:49.136 | 99.00th=[44827], 99.50th=[47973], 99.90th=[52691], 99.95th=[52691], 01:00:49.136 | 99.99th=[52691] 01:00:49.136 bw ( KiB/s): min= 1987, max= 2224, per=4.16%, avg=2117.75, stdev=60.48, samples=20 01:00:49.136 iops : min= 496, max= 556, avg=529.40, stdev=15.21, samples=20 01:00:49.136 lat (msec) : 10=0.30%, 20=1.21%, 50=98.19%, 100=0.30% 01:00:49.136 cpu : usr=98.70%, sys=0.91%, ctx=14, majf=0, minf=105 01:00:49.136 IO depths : 1=2.6%, 2=6.9%, 4=18.4%, 8=61.0%, 16=11.1%, 32=0.0%, >=64=0.0% 01:00:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 complete : 0=0.0%, 4=92.8%, 8=2.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 issued rwts: total=5310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.136 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.136 filename2: (groupid=0, jobs=1): err= 0: pid=2434254: Tue Jun 11 03:57:29 2024 01:00:49.136 read: IOPS=527, BW=2112KiB/s (2162kB/s)(20.6MiB/10002msec) 01:00:49.136 slat (nsec): min=8078, max=51421, avg=18532.78, stdev=5539.46 01:00:49.136 clat (usec): min=23365, max=51978, avg=30164.89, stdev=1629.81 01:00:49.136 lat (usec): min=23388, max=52001, avg=30183.42, stdev=1629.42 01:00:49.136 clat percentiles (usec): 01:00:49.136 | 1.00th=[24773], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.136 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.136 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[31065], 01:00:49.136 | 99.00th=[35914], 99.50th=[35914], 99.90th=[52167], 99.95th=[52167], 01:00:49.136 | 99.99th=[52167] 01:00:49.136 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2108.63, stdev=75.72, samples=19 01:00:49.136 iops : min= 480, max= 544, avg=527.16, stdev=18.93, samples=19 01:00:49.136 lat (msec) : 50=99.70%, 100=0.30% 01:00:49.136 cpu : usr=98.68%, sys=0.97%, ctx=16, majf=0, minf=63 01:00:49.136 IO depths : 1=3.0%, 2=9.1%, 4=24.3%, 8=54.2%, 16=9.5%, 32=0.0%, >=64=0.0% 01:00:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 complete : 0=0.0%, 
4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.136 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.136 filename2: (groupid=0, jobs=1): err= 0: pid=2434256: Tue Jun 11 03:57:29 2024 01:00:49.136 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10017msec) 01:00:49.136 slat (nsec): min=4703, max=56460, avg=19748.25, stdev=7600.25 01:00:49.136 clat (usec): min=5932, max=35371, avg=29834.73, stdev=2219.44 01:00:49.136 lat (usec): min=5948, max=35403, avg=29854.48, stdev=2219.50 01:00:49.136 clat percentiles (usec): 01:00:49.136 | 1.00th=[16909], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 01:00:49.136 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.136 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 01:00:49.136 | 99.00th=[31327], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 01:00:49.136 | 99.99th=[35390] 01:00:49.136 bw ( KiB/s): min= 2048, max= 2432, per=4.19%, avg=2131.20, stdev=95.38, samples=20 01:00:49.136 iops : min= 512, max= 608, avg=532.80, stdev=23.85, samples=20 01:00:49.136 lat (msec) : 10=0.56%, 20=0.67%, 50=98.76% 01:00:49.136 cpu : usr=98.52%, sys=1.10%, ctx=15, majf=0, minf=73 01:00:49.136 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 01:00:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.136 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.136 filename2: (groupid=0, jobs=1): err= 0: pid=2434257: Tue Jun 11 03:57:29 2024 01:00:49.136 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10006msec) 01:00:49.136 slat (nsec): min=6763, max=53399, avg=14629.09, stdev=8240.72 01:00:49.136 clat (usec): min=10510, max=53342, avg=29801.17, stdev=3745.39 01:00:49.136 lat (usec): min=10518, max=53359, avg=29815.80, stdev=3744.73 01:00:49.136 clat percentiles (usec): 01:00:49.136 | 1.00th=[19530], 5.00th=[22414], 10.00th=[25822], 20.00th=[27919], 01:00:49.136 | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 01:00:49.136 | 70.00th=[30278], 80.00th=[30540], 90.00th=[33424], 95.00th=[35390], 01:00:49.136 | 99.00th=[38536], 99.50th=[40109], 99.90th=[53216], 99.95th=[53216], 01:00:49.136 | 99.99th=[53216] 01:00:49.136 bw ( KiB/s): min= 1920, max= 2208, per=4.21%, avg=2140.00, stdev=67.06, samples=20 01:00:49.136 iops : min= 480, max= 552, avg=535.00, stdev=16.76, samples=20 01:00:49.136 lat (msec) : 20=1.08%, 50=98.62%, 100=0.30% 01:00:49.136 cpu : usr=98.83%, sys=0.80%, ctx=14, majf=0, minf=74 01:00:49.136 IO depths : 1=0.4%, 2=1.1%, 4=4.7%, 8=78.1%, 16=15.7%, 32=0.0%, >=64=0.0% 01:00:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 complete : 0=0.0%, 4=89.5%, 8=8.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.136 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.136 filename2: (groupid=0, jobs=1): err= 0: pid=2434258: Tue Jun 11 03:57:29 2024 01:00:49.136 read: IOPS=527, BW=2111KiB/s (2162kB/s)(20.6MiB/10004msec) 01:00:49.136 slat (nsec): min=7969, max=79011, avg=59005.06, stdev=6019.38 01:00:49.136 clat (usec): min=12192, max=71979, avg=29795.97, stdev=2496.00 01:00:49.136 lat (usec): min=12228, max=72002, avg=29854.98, 
stdev=2494.30 01:00:49.136 clat percentiles (usec): 01:00:49.136 | 1.00th=[28443], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 01:00:49.136 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 01:00:49.136 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30540], 01:00:49.136 | 99.00th=[31589], 99.50th=[37487], 99.90th=[66323], 99.95th=[66323], 01:00:49.136 | 99.99th=[71828] 01:00:49.136 bw ( KiB/s): min= 1920, max= 2176, per=4.13%, avg=2101.89, stdev=77.69, samples=19 01:00:49.136 iops : min= 480, max= 544, avg=525.47, stdev=19.42, samples=19 01:00:49.136 lat (msec) : 20=0.34%, 50=99.36%, 100=0.30% 01:00:49.136 cpu : usr=98.96%, sys=0.64%, ctx=12, majf=0, minf=42 01:00:49.136 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 01:00:49.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:49.136 issued rwts: total=5280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:49.136 latency : target=0, window=0, percentile=100.00%, depth=16 01:00:49.136 01:00:49.136 Run status group 0 (all jobs): 01:00:49.136 READ: bw=49.7MiB/s (52.1MB/s), 2110KiB/s-2151KiB/s (2160kB/s-2203kB/s), io=498MiB (522MB), run=10002-10029msec 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.136 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 
03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 bdev_null0 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 [2024-06-11 03:57:29.263803] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 bdev_null1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:49.137 { 01:00:49.137 "params": { 01:00:49.137 "name": "Nvme$subsystem", 01:00:49.137 "trtype": "$TEST_TRANSPORT", 01:00:49.137 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:49.137 "adrfam": "ipv4", 01:00:49.137 "trsvcid": "$NVMF_PORT", 01:00:49.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:49.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:49.137 "hdgst": ${hdgst:-false}, 01:00:49.137 "ddgst": ${ddgst:-false} 01:00:49.137 }, 01:00:49.137 "method": "bdev_nvme_attach_controller" 01:00:49.137 } 01:00:49.137 EOF 01:00:49.137 )") 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local sanitizers 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # shift 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local asan_lib= 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libasan 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:49.137 { 01:00:49.137 "params": { 01:00:49.137 "name": "Nvme$subsystem", 01:00:49.137 "trtype": "$TEST_TRANSPORT", 01:00:49.137 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:49.137 "adrfam": "ipv4", 01:00:49.137 "trsvcid": "$NVMF_PORT", 01:00:49.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:49.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:49.137 "hdgst": ${hdgst:-false}, 01:00:49.137 "ddgst": ${ddgst:-false} 01:00:49.137 }, 01:00:49.137 "method": "bdev_nvme_attach_controller" 01:00:49.137 } 01:00:49.137 EOF 
01:00:49.137 )") 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 01:00:49.137 03:57:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:00:49.137 "params": { 01:00:49.137 "name": "Nvme0", 01:00:49.137 "trtype": "tcp", 01:00:49.137 "traddr": "10.0.0.2", 01:00:49.137 "adrfam": "ipv4", 01:00:49.138 "trsvcid": "4420", 01:00:49.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:49.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:49.138 "hdgst": false, 01:00:49.138 "ddgst": false 01:00:49.138 }, 01:00:49.138 "method": "bdev_nvme_attach_controller" 01:00:49.138 },{ 01:00:49.138 "params": { 01:00:49.138 "name": "Nvme1", 01:00:49.138 "trtype": "tcp", 01:00:49.138 "traddr": "10.0.0.2", 01:00:49.138 "adrfam": "ipv4", 01:00:49.138 "trsvcid": "4420", 01:00:49.138 "subnqn": "nqn.2016-06.io.spdk:cnode1", 01:00:49.138 "hostnqn": "nqn.2016-06.io.spdk:host1", 01:00:49.138 "hdgst": false, 01:00:49.138 "ddgst": false 01:00:49.138 }, 01:00:49.138 "method": "bdev_nvme_attach_controller" 01:00:49.138 }' 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:00:49.138 03:57:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:49.138 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:00:49.138 ... 01:00:49.138 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 01:00:49.138 ... 
01:00:49.138 fio-3.35 01:00:49.138 Starting 4 threads 01:00:49.138 EAL: No free 2048 kB hugepages reported on node 1 01:00:54.429 01:00:54.429 filename0: (groupid=0, jobs=1): err= 0: pid=2435997: Tue Jun 11 03:57:35 2024 01:00:54.429 read: IOPS=2737, BW=21.4MiB/s (22.4MB/s)(107MiB/5003msec) 01:00:54.429 slat (nsec): min=6054, max=29739, avg=9092.00, stdev=3038.35 01:00:54.429 clat (usec): min=849, max=42749, avg=2895.20, stdev=1075.21 01:00:54.429 lat (usec): min=860, max=42775, avg=2904.30, stdev=1075.33 01:00:54.429 clat percentiles (usec): 01:00:54.429 | 1.00th=[ 1958], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2540], 01:00:54.429 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2933], 01:00:54.429 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3425], 95.00th=[ 4015], 01:00:54.429 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 4948], 99.95th=[42730], 01:00:54.429 | 99.99th=[42730] 01:00:54.429 bw ( KiB/s): min=20944, max=22688, per=25.65%, avg=21879.11, stdev=777.16, samples=9 01:00:54.429 iops : min= 2618, max= 2836, avg=2734.89, stdev=97.14, samples=9 01:00:54.429 lat (usec) : 1000=0.01% 01:00:54.429 lat (msec) : 2=1.53%, 4=93.36%, 10=5.04%, 50=0.06% 01:00:54.429 cpu : usr=96.24%, sys=3.44%, ctx=9, majf=0, minf=0 01:00:54.429 IO depths : 1=0.4%, 2=4.2%, 4=68.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:54.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.429 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.429 issued rwts: total=13695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:54.429 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:54.429 filename0: (groupid=0, jobs=1): err= 0: pid=2435998: Tue Jun 11 03:57:35 2024 01:00:54.429 read: IOPS=2612, BW=20.4MiB/s (21.4MB/s)(102MiB/5004msec) 01:00:54.429 slat (nsec): min=6063, max=43552, avg=9160.51, stdev=3146.20 01:00:54.429 clat (usec): min=885, max=8661, avg=3035.57, stdev=498.21 01:00:54.429 lat (usec): min=897, max=8668, avg=3044.73, stdev=497.80 01:00:54.429 clat percentiles (usec): 01:00:54.429 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2638], 20.00th=[ 2737], 01:00:54.429 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 01:00:54.429 | 70.00th=[ 2999], 80.00th=[ 3195], 90.00th=[ 3720], 95.00th=[ 4228], 01:00:54.429 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5276], 99.95th=[ 5866], 01:00:54.429 | 99.99th=[ 8717] 01:00:54.429 bw ( KiB/s): min=20400, max=21872, per=24.57%, avg=20958.22, stdev=469.31, samples=9 01:00:54.429 iops : min= 2550, max= 2734, avg=2619.78, stdev=58.66, samples=9 01:00:54.429 lat (usec) : 1000=0.02% 01:00:54.429 lat (msec) : 2=0.41%, 4=91.94%, 10=7.63% 01:00:54.429 cpu : usr=95.94%, sys=3.72%, ctx=10, majf=0, minf=9 01:00:54.429 IO depths : 1=0.4%, 2=2.0%, 4=70.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:54.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.429 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.429 issued rwts: total=13073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:54.429 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:54.430 filename1: (groupid=0, jobs=1): err= 0: pid=2435999: Tue Jun 11 03:57:35 2024 01:00:54.430 read: IOPS=2688, BW=21.0MiB/s (22.0MB/s)(105MiB/5005msec) 01:00:54.430 slat (nsec): min=6053, max=42184, avg=9301.87, stdev=3124.53 01:00:54.430 clat (usec): min=1183, max=6925, avg=2948.69, stdev=484.28 01:00:54.430 lat (usec): min=1207, max=6936, avg=2957.99, stdev=484.06 
01:00:54.430 clat percentiles (usec): 01:00:54.430 | 1.00th=[ 1975], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 01:00:54.430 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 01:00:54.430 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3523], 95.00th=[ 4015], 01:00:54.430 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 6194], 01:00:54.430 | 99.99th=[ 6915] 01:00:54.430 bw ( KiB/s): min=21088, max=22080, per=25.20%, avg=21500.44, stdev=362.37, samples=9 01:00:54.430 iops : min= 2636, max= 2760, avg=2687.56, stdev=45.30, samples=9 01:00:54.430 lat (msec) : 2=1.11%, 4=93.76%, 10=5.13% 01:00:54.430 cpu : usr=95.54%, sys=4.14%, ctx=8, majf=0, minf=9 01:00:54.430 IO depths : 1=0.1%, 2=1.7%, 4=70.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:54.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.430 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.430 issued rwts: total=13458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:54.430 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:54.430 filename1: (groupid=0, jobs=1): err= 0: pid=2436000: Tue Jun 11 03:57:35 2024 01:00:54.430 read: IOPS=2627, BW=20.5MiB/s (21.5MB/s)(103MiB/5003msec) 01:00:54.430 slat (nsec): min=6046, max=32403, avg=8831.89, stdev=3043.28 01:00:54.430 clat (usec): min=639, max=6457, avg=3019.97, stdev=498.73 01:00:54.430 lat (usec): min=650, max=6463, avg=3028.80, stdev=498.34 01:00:54.430 clat percentiles (usec): 01:00:54.430 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2737], 01:00:54.430 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966], 01:00:54.430 | 70.00th=[ 2999], 80.00th=[ 3163], 90.00th=[ 3752], 95.00th=[ 4228], 01:00:54.430 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5669], 01:00:54.430 | 99.99th=[ 6456] 01:00:54.430 bw ( KiB/s): min=20304, max=21744, per=24.71%, avg=21084.44, stdev=453.38, samples=9 01:00:54.430 iops : min= 2538, max= 2718, avg=2635.56, stdev=56.67, samples=9 01:00:54.430 lat (usec) : 750=0.01% 01:00:54.430 lat (msec) : 2=0.42%, 4=91.61%, 10=7.96% 01:00:54.430 cpu : usr=95.66%, sys=4.02%, ctx=14, majf=0, minf=9 01:00:54.430 IO depths : 1=0.1%, 2=1.0%, 4=71.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 01:00:54.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.430 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:00:54.430 issued rwts: total=13146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:00:54.430 latency : target=0, window=0, percentile=100.00%, depth=8 01:00:54.430 01:00:54.430 Run status group 0 (all jobs): 01:00:54.430 READ: bw=83.3MiB/s (87.4MB/s), 20.4MiB/s-21.4MiB/s (21.4MB/s-22.4MB/s), io=417MiB (437MB), run=5003-5005msec 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 01:00:54.430 real 0m24.168s 01:00:54.430 user 4m52.694s 01:00:54.430 sys 0m4.741s 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 ************************************ 01:00:54.430 END TEST fio_dif_rand_params 01:00:54.430 ************************************ 01:00:54.430 03:57:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 01:00:54.430 03:57:35 nvmf_dif -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:00:54.430 03:57:35 nvmf_dif -- common/autotest_common.sh@1106 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 ************************************ 01:00:54.430 START TEST fio_dif_digest 01:00:54.430 ************************************ 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # fio_dif_digest 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 bdev_null0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:00:54.430 [2024-06-11 03:57:35.511911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 01:00:54.430 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 01:00:54.430 { 01:00:54.430 "params": { 01:00:54.430 "name": "Nvme$subsystem", 01:00:54.430 "trtype": "$TEST_TRANSPORT", 01:00:54.431 "traddr": "$NVMF_FIRST_TARGET_IP", 01:00:54.431 "adrfam": "ipv4", 01:00:54.431 "trsvcid": "$NVMF_PORT", 01:00:54.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 01:00:54.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 01:00:54.431 
"hdgst": ${hdgst:-false}, 01:00:54.431 "ddgst": ${ddgst:-false} 01:00:54.431 }, 01:00:54.431 "method": "bdev_nvme_attach_controller" 01:00:54.431 } 01:00:54.431 EOF 01:00:54.431 )") 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1355 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local fio_dir=/usr/src/fio 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # sanitizers=('libasan' 'libclang_rt.asan') 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local sanitizers 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # shift 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local asan_lib= 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libasan 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 01:00:54.431 "params": { 01:00:54.431 "name": "Nvme0", 01:00:54.431 "trtype": "tcp", 01:00:54.431 "traddr": "10.0.0.2", 01:00:54.431 "adrfam": "ipv4", 01:00:54.431 "trsvcid": "4420", 01:00:54.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:00:54.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:00:54.431 "hdgst": true, 01:00:54.431 "ddgst": true 01:00:54.431 }, 01:00:54.431 "method": "bdev_nvme_attach_controller" 01:00:54.431 }' 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # for sanitizer in "${sanitizers[@]}" 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # grep libclang_rt.asan 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # awk '{print $3}' 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # asan_lib= 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # [[ -n '' ]] 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 01:00:54.431 03:57:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 01:00:54.691 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 01:00:54.691 ... 
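Note: what sets this fio_dif_digest attach apart is visible in the resolved JSON just above: "hdgst": true and "ddgst": true, which enable the NVMe/TCP header and data digests (a CRC32C over each PDU header and payload) on the Nvme0 controller. The generator flips them with plain parameter expansion, so the same heredoc fragment serves both the digest and non-digest runs:

# From the heredoc in the trace, with the per-run variables resolved for
# subsystem 0; hdgst/ddgst default to false unless the test sets them.
hdgst=true ddgst=true
config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")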
01:00:54.691 fio-3.35 01:00:54.691 Starting 3 threads 01:00:54.691 EAL: No free 2048 kB hugepages reported on node 1 01:01:06.917 01:01:06.917 filename0: (groupid=0, jobs=1): err= 0: pid=2437089: Tue Jun 11 03:57:46 2024 01:01:06.917 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(342MiB/10045msec) 01:01:06.917 slat (nsec): min=6335, max=26771, avg=11588.97, stdev=1938.28 01:01:06.917 clat (usec): min=7915, max=54336, avg=10992.14, stdev=1929.63 01:01:06.917 lat (usec): min=7926, max=54348, avg=11003.73, stdev=1929.61 01:01:06.917 clat percentiles (usec): 01:01:06.917 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 01:01:06.917 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 01:01:06.917 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12518], 01:01:06.917 | 99.00th=[13173], 99.50th=[13698], 99.90th=[52691], 99.95th=[53740], 01:01:06.917 | 99.99th=[54264] 01:01:06.917 bw ( KiB/s): min=32512, max=36096, per=32.40%, avg=34973.10, stdev=977.63, samples=20 01:01:06.917 iops : min= 254, max= 282, avg=273.20, stdev= 7.63, samples=20 01:01:06.917 lat (msec) : 10=12.87%, 20=86.94%, 50=0.07%, 100=0.11% 01:01:06.917 cpu : usr=94.52%, sys=5.18%, ctx=19, majf=0, minf=125 01:01:06.917 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:01:06.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:06.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:06.917 issued rwts: total=2734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:06.917 latency : target=0, window=0, percentile=100.00%, depth=3 01:01:06.917 filename0: (groupid=0, jobs=1): err= 0: pid=2437090: Tue Jun 11 03:57:46 2024 01:01:06.917 read: IOPS=289, BW=36.1MiB/s (37.9MB/s)(362MiB/10006msec) 01:01:06.917 slat (usec): min=6, max=139, avg=11.44, stdev= 3.25 01:01:06.917 clat (usec): min=6470, max=14795, avg=10360.72, stdev=800.89 01:01:06.917 lat (usec): min=6483, max=14802, avg=10372.16, stdev=800.91 01:01:06.917 clat percentiles (usec): 01:01:06.917 | 1.00th=[ 8291], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 01:01:06.917 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 01:01:06.917 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 01:01:06.917 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13698], 99.95th=[14091], 01:01:06.917 | 99.99th=[14746] 01:01:06.917 bw ( KiB/s): min=35328, max=38144, per=34.32%, avg=37039.16, stdev=682.95, samples=19 01:01:06.917 iops : min= 276, max= 298, avg=289.37, stdev= 5.34, samples=19 01:01:06.917 lat (msec) : 10=30.38%, 20=69.62% 01:01:06.917 cpu : usr=94.37%, sys=5.33%, ctx=21, majf=0, minf=151 01:01:06.917 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:01:06.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:06.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:06.917 issued rwts: total=2893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:06.917 latency : target=0, window=0, percentile=100.00%, depth=3 01:01:06.917 filename0: (groupid=0, jobs=1): err= 0: pid=2437092: Tue Jun 11 03:57:46 2024 01:01:06.918 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(355MiB/10043msec) 01:01:06.918 slat (nsec): min=6406, max=24622, avg=11356.75, stdev=1937.75 01:01:06.918 clat (usec): min=6778, max=46683, avg=10570.04, stdev=1256.73 01:01:06.918 lat (usec): min=6791, max=46695, avg=10581.40, stdev=1256.70 01:01:06.918 clat percentiles (usec): 01:01:06.918 | 1.00th=[ 8455], 5.00th=[ 
9241], 10.00th=[ 9503], 20.00th=[ 9896], 01:01:06.918 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 01:01:06.918 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 01:01:06.918 | 99.00th=[12649], 99.50th=[12780], 99.90th=[14484], 99.95th=[45351], 01:01:06.918 | 99.99th=[46924] 01:01:06.918 bw ( KiB/s): min=35072, max=37632, per=33.69%, avg=36364.80, stdev=566.22, samples=20 01:01:06.918 iops : min= 274, max= 294, avg=284.10, stdev= 4.42, samples=20 01:01:06.918 lat (msec) : 10=24.31%, 20=75.62%, 50=0.07% 01:01:06.918 cpu : usr=94.90%, sys=4.80%, ctx=25, majf=0, minf=116 01:01:06.918 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 01:01:06.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:06.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 01:01:06.918 issued rwts: total=2843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 01:01:06.918 latency : target=0, window=0, percentile=100.00%, depth=3 01:01:06.918 01:01:06.918 Run status group 0 (all jobs): 01:01:06.918 READ: bw=105MiB/s (111MB/s), 34.0MiB/s-36.1MiB/s (35.7MB/s-37.9MB/s), io=1059MiB (1110MB), run=10006-10045msec 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:06.918 01:01:06.918 real 0m10.974s 01:01:06.918 user 0m35.135s 01:01:06.918 sys 0m1.803s 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # xtrace_disable 01:01:06.918 03:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 01:01:06.918 ************************************ 01:01:06.918 END TEST fio_dif_digest 01:01:06.918 ************************************ 01:01:06.918 03:57:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 01:01:06.918 03:57:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@117 -- # sync 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@120 -- # set +e 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:01:06.918 rmmod nvme_tcp 01:01:06.918 rmmod nvme_fabrics 01:01:06.918 rmmod nvme_keyring 
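Note: the rmmod lines above begin nvmftestfini's teardown: flush outstanding I/O, unload the kernel NVMe/TCP initiator modules (removing nvme-tcp drags nvme_fabrics and nvme_keyring out with it, as the rmmod output shows), kill the long-running SPDK target, then hand the NVMe and DMA-engine devices back to their kernel drivers, which produces the vfio-pci -> nvme/ioatdma rebind lines further down. Reduced to its effective commands:

# Teardown as executed in this trace; 2428382 is this run's SPDK nvmf
# target pid (the harness keeps it in a variable rather than hard-coding it).
sync
modprobe -v -r nvme-tcp        # prints the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines
modprobe -v -r nvme-fabrics    # effectively a no-op here; already removed above
kill 2428382 && wait 2428382
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset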
01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@124 -- # set -e 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@125 -- # return 0 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2428382 ']' 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2428382 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@949 -- # '[' -z 2428382 ']' 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@953 -- # kill -0 2428382 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@954 -- # uname 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2428382 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@955 -- # process_name=reactor_0 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2428382' 01:01:06.918 killing process with pid 2428382 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@968 -- # kill 2428382 01:01:06.918 03:57:46 nvmf_dif -- common/autotest_common.sh@973 -- # wait 2428382 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:01:06.918 03:57:46 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:01:07.855 Waiting for block devices as requested 01:01:07.855 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 01:01:08.115 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:08.115 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:08.115 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:08.376 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:08.376 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:08.376 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 01:01:08.376 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:08.635 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:08.635 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:08.635 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:08.635 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:08.894 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:08.894 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:08.894 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 01:01:09.153 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:09.153 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:09.153 03:57:50 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:01:09.153 03:57:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:01:09.153 03:57:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:01:09.153 03:57:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 01:01:09.153 03:57:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:09.153 03:57:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:01:09.153 03:57:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:11.687 03:57:52 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 01:01:11.687 01:01:11.687 real 1m13.429s 01:01:11.687 user 7m8.786s 01:01:11.687 sys 0m19.438s 01:01:11.687 03:57:52 nvmf_dif -- common/autotest_common.sh@1125 -- # xtrace_disable 01:01:11.687 03:57:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 01:01:11.687 
************************************ 01:01:11.687 END TEST nvmf_dif 01:01:11.687 ************************************ 01:01:11.687 03:57:52 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 01:01:11.687 03:57:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:01:11.687 03:57:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 01:01:11.687 03:57:52 -- common/autotest_common.sh@10 -- # set +x 01:01:11.687 ************************************ 01:01:11.687 START TEST nvmf_abort_qd_sizes 01:01:11.687 ************************************ 01:01:11.687 03:57:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 01:01:11.687 * Looking for test storage... 01:01:11.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 01:01:11.687 03:57:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:01:11.687 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 01:01:11.687 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:01:11.688 03:57:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 01:01:11.688 03:57:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 01:01:16.959 Found 0000:86:00.0 (0x8086 - 0x159b) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 01:01:16.959 Found 0000:86:00.1 (0x8086 - 0x159b) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 01:01:16.959 Found net devices under 0000:86:00.0: cvl_0_0 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 01:01:16.959 Found net devices under 0000:86:00.1: cvl_0_1 01:01:16.959 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
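The discovery loop traced above matches each supported E810 NIC (vendor:device 0x8086:0x159b) and then resolves its kernel interface name by globbing sysfs, which is where the "Found net devices under ..." lines come from. A minimal sketch of that lookup, hard-coding the two PCI addresses seen in this run:

# map a NIC's PCI address to its netdev name via sysfs (addresses taken from this log)
for pci in 0000:86:00.0 0000:86:00.1; do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue                     # skip if the glob matched nothing
        echo "Found net devices under $pci: ${path##*/}"
    done
done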
01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 01:01:16.960 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 01:01:17.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 01:01:17.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 01:01:17.219 01:01:17.219 --- 10.0.0.2 ping statistics --- 01:01:17.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:17.219 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 01:01:17.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
01:01:17.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 01:01:17.219 01:01:17.219 --- 10.0.0.1 ping statistics --- 01:01:17.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 01:01:17.219 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 01:01:17.219 03:57:58 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:01:19.755 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 01:01:19.755 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 01:01:21.151 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@723 -- # xtrace_disable 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2445535 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2445535 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # '[' -z 2445535 ']' 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local max_retries=100 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
01:01:21.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # xtrace_disable 01:01:21.429 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:01:21.429 [2024-06-11 03:58:02.697505] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 01:01:21.429 [2024-06-11 03:58:02.697546] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 01:01:21.429 EAL: No free 2048 kB hugepages reported on node 1 01:01:21.429 [2024-06-11 03:58:02.759880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 01:01:21.429 [2024-06-11 03:58:02.802969] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 01:01:21.429 [2024-06-11 03:58:02.803008] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 01:01:21.429 [2024-06-11 03:58:02.803021] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 01:01:21.429 [2024-06-11 03:58:02.803027] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 01:01:21.429 [2024-06-11 03:58:02.803032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 01:01:21.429 [2024-06-11 03:58:02.803077] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 01:01:21.429 [2024-06-11 03:58:02.803176] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 01:01:21.429 [2024-06-11 03:58:02.803263] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 01:01:21.429 [2024-06-11 03:58:02.803264] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@859 -- # (( i == 0 )) 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@863 -- # return 0 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@729 -- # xtrace_disable 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5f:00.0 ]] 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5f:00.0 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5f:00.0 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 01:01:21.688 03:58:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:01:21.688 ************************************ 01:01:21.688 START TEST spdk_target_abort 01:01:21.688 ************************************ 01:01:21.688 03:58:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # spdk_target 01:01:21.688 03:58:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 01:01:21.688 03:58:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5f:00.0 -b spdk_target 01:01:21.688 03:58:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:21.688 03:58:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:24.973 spdk_targetn1 01:01:24.973 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:24.974 [2024-06-11 03:58:05.821500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:24.974 [2024-06-11 03:58:05.854604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:01:24.974 03:58:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:24.974 EAL: No free 2048 kB hugepages reported on node 1 
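The rabort helper traced above assembles the -r target string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and then sweeps the queue depths in qds=(4 24 64), so the abort example runs three times below. Paraphrased as a standalone loop, assuming an SPDK build tree:

# sweep abort queue depths against the TCP listener, as in abort_qd_sizes.sh
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done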
01:01:28.259 Initializing NVMe Controllers 01:01:28.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:01:28.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:01:28.259 Initialization complete. Launching workers. 01:01:28.259 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14331, failed: 0 01:01:28.259 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1449, failed to submit 12882 01:01:28.259 success 759, unsuccess 690, failed 0 01:01:28.259 03:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:01:28.259 03:58:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:28.259 EAL: No free 2048 kB hugepages reported on node 1 01:01:31.547 Initializing NVMe Controllers 01:01:31.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:01:31.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:01:31.547 Initialization complete. Launching workers. 01:01:31.547 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8521, failed: 0 01:01:31.547 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7273 01:01:31.547 success 319, unsuccess 929, failed 0 01:01:31.547 03:58:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:01:31.547 03:58:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:31.547 EAL: No free 2048 kB hugepages reported on node 1 01:01:34.835 Initializing NVMe Controllers 01:01:34.835 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 01:01:34.835 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:01:34.835 Initialization complete. Launching workers. 
01:01:34.835 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38639, failed: 0 01:01:34.835 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2793, failed to submit 35846 01:01:34.835 success 607, unsuccess 2186, failed 0 01:01:34.835 03:58:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 01:01:34.835 03:58:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:34.835 03:58:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:34.835 03:58:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:34.835 03:58:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 01:01:34.835 03:58:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 01:01:34.835 03:58:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2445535 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@949 -- # '[' -z 2445535 ']' 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # kill -0 2445535 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # uname 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2445535 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # process_name=reactor_0 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2445535' 01:01:36.739 killing process with pid 2445535 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # kill 2445535 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # wait 2445535 01:01:36.739 01:01:36.739 real 0m14.928s 01:01:36.739 user 0m57.109s 01:01:36.739 sys 0m2.412s 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:36.739 ************************************ 01:01:36.739 END TEST spdk_target_abort 01:01:36.739 ************************************ 01:01:36.739 03:58:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 01:01:36.739 03:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:01:36.739 03:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1106 -- # xtrace_disable 01:01:36.739 03:58:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:01:36.739 ************************************ 01:01:36.739 START TEST kernel_target_abort 01:01:36.739 
************************************ 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # kernel_target 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 01:01:36.739 03:58:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 01:01:36.739 03:58:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 01:01:36.739 03:58:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:01:39.271 Waiting for block devices as requested 01:01:39.271 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 01:01:39.271 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:39.530 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:39.530 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:39.530 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:39.530 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:39.788 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 01:01:39.788 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:39.788 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:40.046 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:40.046 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:40.046 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:40.046 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:40.305 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:40.305 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 01:01:40.305 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:40.563 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # local device=nvme0n1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ none != none ]] 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 01:01:40.563 No valid GPT data, bailing 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:01:40.563 03:58:21 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 01:01:40.563 01:01:40.563 Discovery Log Number of Records 2, Generation counter 2 01:01:40.563 =====Discovery Log Entry 0====== 01:01:40.563 trtype: tcp 01:01:40.563 adrfam: ipv4 01:01:40.563 subtype: current discovery subsystem 01:01:40.563 treq: not specified, sq flow control disable supported 01:01:40.563 portid: 1 01:01:40.563 trsvcid: 4420 01:01:40.563 subnqn: nqn.2014-08.org.nvmexpress.discovery 01:01:40.563 traddr: 10.0.0.1 01:01:40.563 eflags: none 01:01:40.563 sectype: none 01:01:40.563 =====Discovery Log Entry 1====== 01:01:40.563 trtype: tcp 01:01:40.563 adrfam: ipv4 01:01:40.563 subtype: nvme subsystem 01:01:40.563 treq: not specified, sq flow control disable supported 01:01:40.563 portid: 1 01:01:40.563 trsvcid: 4420 01:01:40.563 subnqn: nqn.2016-06.io.spdk:testnqn 01:01:40.563 traddr: 10.0.0.1 01:01:40.563 eflags: none 01:01:40.563 sectype: none 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:40.563 03:58:21 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:01:40.563 03:58:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:40.822 EAL: No free 2048 kB hugepages reported on node 1 01:01:44.102 Initializing NVMe Controllers 01:01:44.102 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:01:44.102 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:01:44.102 Initialization complete. Launching workers. 01:01:44.102 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84008, failed: 0 01:01:44.102 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 84008, failed to submit 0 01:01:44.102 success 0, unsuccess 84008, failed 0 01:01:44.102 03:58:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:01:44.102 03:58:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:44.102 EAL: No free 2048 kB hugepages reported on node 1 01:01:47.387 Initializing NVMe Controllers 01:01:47.387 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:01:47.387 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:01:47.387 Initialization complete. Launching workers. 
01:01:47.387 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138957, failed: 0 01:01:47.387 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35122, failed to submit 103835 01:01:47.387 success 0, unsuccess 35122, failed 0 01:01:47.387 03:58:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 01:01:47.387 03:58:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 01:01:47.387 EAL: No free 2048 kB hugepages reported on node 1 01:01:49.945 Initializing NVMe Controllers 01:01:49.945 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 01:01:49.945 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 01:01:49.945 Initialization complete. Launching workers. 01:01:49.945 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133392, failed: 0 01:01:49.946 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33398, failed to submit 99994 01:01:49.946 success 0, unsuccess 33398, failed 0 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 01:01:49.946 03:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 01:01:52.482 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 01:01:52.482 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 01:01:52.482 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 01:01:53.862 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 01:01:54.121 01:01:54.121 real 0m17.307s 01:01:54.121 user 0m7.904s 01:01:54.121 sys 0m4.932s 01:01:54.121 03:58:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 01:01:54.121 03:58:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 01:01:54.121 ************************************ 01:01:54.121 END TEST kernel_target_abort 01:01:54.121 ************************************ 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 01:01:54.121 rmmod nvme_tcp 01:01:54.121 rmmod nvme_fabrics 01:01:54.121 rmmod nvme_keyring 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2445535 ']' 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2445535 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@949 -- # '[' -z 2445535 ']' 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@953 -- # kill -0 2445535 01:01:54.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (2445535) - No such process 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@976 -- # echo 'Process with pid 2445535 is not found' 01:01:54.121 Process with pid 2445535 is not found 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 01:01:54.121 03:58:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 01:01:57.410 Waiting for block devices as requested 01:01:57.410 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 01:01:57.411 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:57.411 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:57.411 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:57.411 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:57.411 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:57.411 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 01:01:57.670 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:57.670 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:57.670 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 01:01:57.670 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 01:01:57.929 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 01:01:57.929 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 01:01:57.929 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 01:01:58.187 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 01:01:58.187 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 01:01:58.187 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 01:01:58.187 03:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 01:01:58.187 03:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 01:01:58.187 03:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 01:01:58.187 03:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 01:01:58.187 03:58:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 01:01:58.187 03:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 01:01:58.187 03:58:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 01:02:00.719 03:58:41 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 01:02:00.719 01:02:00.719 real 0m49.050s 01:02:00.719 user 1m9.111s 01:02:00.719 sys 0m15.754s 01:02:00.719 03:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # xtrace_disable 01:02:00.719 03:58:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 01:02:00.719 ************************************ 01:02:00.719 END TEST nvmf_abort_qd_sizes 01:02:00.719 ************************************ 01:02:00.719 03:58:41 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 01:02:00.719 03:58:41 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:02:00.719 03:58:41 -- common/autotest_common.sh@1106 -- # xtrace_disable 01:02:00.719 03:58:41 -- common/autotest_common.sh@10 -- # set +x 01:02:00.719 ************************************ 01:02:00.719 START TEST keyring_file 01:02:00.719 ************************************ 01:02:00.719 03:58:41 keyring_file -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 01:02:00.719 * Looking for test storage... 
01:02:00.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 01:02:00.719 03:58:41 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 01:02:00.719 03:58:41 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@7 -- # uname -s 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:00.719 03:58:41 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:02:00.719 03:58:41 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:00.719 03:58:41 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:00.719 03:58:41 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:00.720 03:58:41 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:00.720 03:58:41 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:00.720 03:58:41 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:00.720 03:58:41 keyring_file -- paths/export.sh@5 -- # export PATH 01:02:00.720 03:58:41 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@47 -- # : 0 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@17 -- # name=key0 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@17 -- # digest=0 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@18 -- # mktemp 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.u7BqU0YzlA 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@705 -- # python - 01:02:00.720 03:58:41 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u7BqU0YzlA 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u7BqU0YzlA 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.u7BqU0YzlA 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@17 -- # name=key1 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@17 -- # digest=0 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@18 -- # mktemp 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.g7AxLHmwKD 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:02:00.720 03:58:41 keyring_file -- nvmf/common.sh@705 -- # python - 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.g7AxLHmwKD 01:02:00.720 03:58:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.g7AxLHmwKD 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.g7AxLHmwKD 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=2454594 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2454594 01:02:00.720 03:58:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 01:02:00.720 03:58:41 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2454594 ']' 01:02:00.720 03:58:41 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:00.720 03:58:41 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 01:02:00.720 03:58:41 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:00.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:00.720 03:58:41 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 01:02:00.720 03:58:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:02:00.720 [2024-06-11 03:58:41.982351] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
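
The prep_key steps just traced turn a raw hex PSK into a key file: mktemp picks the path, an inline python snippet prints the NVMe/TCP PSK interchange form, and chmod 0600 locks the file down before it is handed to the keyring. A self-contained sketch of that flow; the interchange layout used here (prefix, two-hex-digit hash id, base64 of the key bytes plus little-endian CRC32) is my reading of the format, not a verbatim copy of nvmf/common.sh:

prep_key_sketch() {
    local key=$1 digest=$2 path
    path=$(mktemp)
    python - "$key" "$digest" > "$path" <<'PY'
import sys, base64, zlib
key = bytes.fromhex(sys.argv[1])             # e.g. 00112233445566778899aabbccddeeff
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed little-endian per the interchange format
print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$path"   # anything wider gets rejected later in the test
    echo "$path"
}
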
01:02:00.720 [2024-06-11 03:58:41.982397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454594 ] 01:02:00.720 EAL: No free 2048 kB hugepages reported on node 1 01:02:00.720 [2024-06-11 03:58:42.041043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:00.720 [2024-06-11 03:58:42.081564] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@863 -- # return 0 01:02:00.979 03:58:42 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:02:00.979 [2024-06-11 03:58:42.268045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:00.979 null0 01:02:00.979 [2024-06-11 03:58:42.300092] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:00.979 [2024-06-11 03:58:42.300443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:02:00.979 [2024-06-11 03:58:42.308112] tcp.c:3671:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:02:00.979 03:58:42 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@649 -- # local es=0 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:02:00.979 [2024-06-11 03:58:42.320143] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 01:02:00.979 request: 01:02:00.979 { 01:02:00.979 "nqn": "nqn.2016-06.io.spdk:cnode0", 01:02:00.979 "secure_channel": false, 01:02:00.979 "listen_address": { 01:02:00.979 "trtype": "tcp", 01:02:00.979 "traddr": "127.0.0.1", 01:02:00.979 "trsvcid": "4420" 01:02:00.979 }, 01:02:00.979 "method": "nvmf_subsystem_add_listener", 01:02:00.979 "req_id": 1 01:02:00.979 } 01:02:00.979 Got JSON-RPC error response 01:02:00.979 response: 01:02:00.979 { 01:02:00.979 "code": -32602, 01:02:00.979 "message": "Invalid parameters" 01:02:00.979 } 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@652 -- # es=1 01:02:00.979 03:58:42 
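
file.sh@43 is a deliberate failure: the target already listens on 127.0.0.1:4420, so a second nvmf_subsystem_add_listener must be rejected, and the NOT wrapper turns that expected error (the -32602 response above) into a passing assertion. Stripped of the argument validation visible in the trace, the contract is roughly:

NOT() {              # simplified sketch; the real helper also screens crashes (es > 128)
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # succeed only if the wrapped command failed
}
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
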
keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 01:02:00.979 03:58:42 keyring_file -- keyring/file.sh@46 -- # bperfpid=2454599 01:02:00.979 03:58:42 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2454599 /var/tmp/bperf.sock 01:02:00.979 03:58:42 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2454599 ']' 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:02:00.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 01:02:00.979 03:58:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:02:00.979 [2024-06-11 03:58:42.370455] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 01:02:00.979 [2024-06-11 03:58:42.370498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2454599 ] 01:02:01.238 EAL: No free 2048 kB hugepages reported on node 1 01:02:01.238 [2024-06-11 03:58:42.449366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:01.238 [2024-06-11 03:58:42.489659] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 01:02:01.805 03:58:43 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 01:02:01.805 03:58:43 keyring_file -- common/autotest_common.sh@863 -- # return 0 01:02:01.805 03:58:43 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:01.805 03:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:02.063 03:58:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.g7AxLHmwKD 01:02:02.063 03:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.g7AxLHmwKD 01:02:02.321 03:58:43 keyring_file -- keyring/file.sh@51 -- # get_key key0 01:02:02.321 03:58:43 keyring_file -- keyring/file.sh@51 -- # jq -r .path 01:02:02.321 03:58:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:02.321 03:58:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:02.321 03:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:02.321 03:58:43 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.u7BqU0YzlA == \/\t\m\p\/\t\m\p\.\u\7\B\q\U\0\Y\z\l\A ]] 01:02:02.321 03:58:43 keyring_file -- keyring/file.sh@52 -- # get_key key1 01:02:02.321 03:58:43 
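
From here on the test drives bdevperf's private RPC socket rather than the target's: bperf_cmd is rpc.py pointed at /var/tmp/bperf.sock, and the two key files are registered against that process. Minimal sketch of the calls just traced ($rootdir stands for the SPDK checkout):

bperfsock=/var/tmp/bperf.sock
bperf_cmd() { "$rootdir/scripts/rpc.py" -s "$bperfsock" "$@"; }

bperf_cmd keyring_file_add_key key0 "$key0path"   # the chmod-0600 files from prep_key
bperf_cmd keyring_file_add_key key1 "$key1path"
bperf_cmd keyring_get_keys                        # entries carry name, path, refcnt, removed
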
keyring_file -- keyring/file.sh@52 -- # jq -r .path 01:02:02.321 03:58:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:02.321 03:58:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:02:02.321 03:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:02.579 03:58:43 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.g7AxLHmwKD == \/\t\m\p\/\t\m\p\.\g\7\A\x\L\H\m\w\K\D ]] 01:02:02.579 03:58:43 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 01:02:02.579 03:58:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:02.579 03:58:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:02.579 03:58:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:02.579 03:58:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:02.579 03:58:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:02.837 03:58:44 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 01:02:02.837 03:58:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 01:02:02.837 03:58:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:02:02.837 03:58:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:02.837 03:58:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:02.837 03:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:02.837 03:58:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:02:02.837 03:58:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 01:02:02.837 03:58:44 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:02.837 03:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:03.095 [2024-06-11 03:58:44.326028] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:03.095 nvme0n1 01:02:03.095 03:58:44 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 01:02:03.095 03:58:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:03.095 03:58:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:03.095 03:58:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:03.096 03:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:03.096 03:58:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:03.354 03:58:44 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 01:02:03.354 03:58:44 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 01:02:03.354 03:58:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:02:03.354 03:58:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:03.354 03:58:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:03.354 
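
The repetitive keyring_get_keys-plus-jq pairs above all come from two small helpers in keyring/common.sh whose shape can be read straight off the trace. Reconstructed for reference:

get_key() {       # JSON object for one named key
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
get_refcnt() {    # current holder count for the key
    get_key "$1" | jq -r .refcnt
}
(( $(get_refcnt key0) == 2 ))   # file entry plus the nvme0 controller that just attached
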
03:58:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:03.354 03:58:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:02:03.612 03:58:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 01:02:03.612 03:58:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
01:02:03.612 Running I/O for 1 seconds...
01:02:04.547
01:02:04.547 Latency(us)
01:02:04.547 Device Information : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average      min       max
01:02:04.547 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
01:02:04.547 nvme0n1            : 1.01         15002.98    58.61      0.00      0.00    8503.58   5430.13  16103.13
01:02:04.548 ===================================================================================================================
01:02:04.548 Total              :              15002.98    58.61      0.00      0.00    8503.58   5430.13  16103.13
01:02:04.548 0
01:02:04.548 03:58:45 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:02:04.548 03:58:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:02:04.805 03:58:46 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 01:02:04.805 03:58:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:04.805 03:58:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:04.805 03:58:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:04.805 03:58:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:04.805 03:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:05.064 03:58:46 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 01:02:05.064 03:58:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 01:02:05.064 03:58:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:02:05.064 03:58:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:05.064 03:58:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:05.064 03:58:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:02:05.064 03:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:05.064 03:58:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 01:02:05.064 03:58:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:02:05.064 03:58:46 keyring_file -- common/autotest_common.sh@649 -- # local es=0 01:02:05.064 03:58:46 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:02:05.064 03:58:46 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 01:02:05.064 03:58:46 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:05.064 03:58:46 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 01:02:05.064 03:58:46
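
This is the suite's core positive/negative pair: attaching with --psk key0 pins the key for the controller's lifetime (refcnt 2), detaching releases it (back to 1), and the follow-up attach naming key1 must fail because the target only provisioned key0's PSK. As issued over the bperf socket:

bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
bperf_cmd bdev_nvme_detach_controller nvme0
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
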
keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:05.064 03:58:46 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:02:05.064 03:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 01:02:05.322 [2024-06-11 03:58:46.564252] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:02:05.322 [2024-06-11 03:58:46.564715] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1584260 (107): Transport endpoint is not connected 01:02:05.322 [2024-06-11 03:58:46.565710] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1584260 (9): Bad file descriptor 01:02:05.322 [2024-06-11 03:58:46.566710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:02:05.322 [2024-06-11 03:58:46.566721] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:02:05.322 [2024-06-11 03:58:46.566727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:02:05.322 request: 01:02:05.322 { 01:02:05.322 "name": "nvme0", 01:02:05.322 "trtype": "tcp", 01:02:05.322 "traddr": "127.0.0.1", 01:02:05.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:02:05.322 "adrfam": "ipv4", 01:02:05.322 "trsvcid": "4420", 01:02:05.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:02:05.322 "psk": "key1", 01:02:05.322 "method": "bdev_nvme_attach_controller", 01:02:05.322 "req_id": 1 01:02:05.322 } 01:02:05.322 Got JSON-RPC error response 01:02:05.322 response: 01:02:05.322 { 01:02:05.322 "code": -5, 01:02:05.322 "message": "Input/output error" 01:02:05.322 } 01:02:05.322 03:58:46 keyring_file -- common/autotest_common.sh@652 -- # es=1 01:02:05.322 03:58:46 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 01:02:05.322 03:58:46 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 01:02:05.322 03:58:46 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 01:02:05.322 03:58:46 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 01:02:05.322 03:58:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:05.322 03:58:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:05.322 03:58:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:05.322 03:58:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:05.322 03:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:05.580 03:58:46 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 01:02:05.580 03:58:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 01:02:05.580 03:58:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:02:05.580 03:58:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:05.580 03:58:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:05.580 03:58:46 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key1")' 01:02:05.580 03:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:05.580 03:58:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 01:02:05.580 03:58:46 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 01:02:05.580 03:58:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:02:05.839 03:58:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 01:02:05.839 03:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 01:02:06.098 03:58:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 01:02:06.098 03:58:47 keyring_file -- keyring/file.sh@77 -- # jq length 01:02:06.098 03:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:06.098 03:58:47 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 01:02:06.098 03:58:47 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.u7BqU0YzlA 01:02:06.098 03:58:47 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:06.098 03:58:47 keyring_file -- common/autotest_common.sh@649 -- # local es=0 01:02:06.098 03:58:47 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:06.098 03:58:47 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 01:02:06.098 03:58:47 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:06.098 03:58:47 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 01:02:06.098 03:58:47 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:06.098 03:58:47 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:06.098 03:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:06.357 [2024-06-11 03:58:47.582709] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.u7BqU0YzlA': 0100660 01:02:06.357 [2024-06-11 03:58:47.582732] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 01:02:06.357 request: 01:02:06.357 { 01:02:06.357 "name": "key0", 01:02:06.357 "path": "/tmp/tmp.u7BqU0YzlA", 01:02:06.357 "method": "keyring_file_add_key", 01:02:06.357 "req_id": 1 01:02:06.357 } 01:02:06.357 Got JSON-RPC error response 01:02:06.357 response: 01:02:06.357 { 01:02:06.357 "code": -1, 01:02:06.357 "message": "Operation not permitted" 01:02:06.357 } 01:02:06.357 03:58:47 keyring_file -- common/autotest_common.sh@652 -- # es=1 01:02:06.357 03:58:47 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 01:02:06.357 03:58:47 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 01:02:06.357 03:58:47 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 01:02:06.357 03:58:47 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.u7BqU0YzlA 01:02:06.357 03:58:47 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
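
The keyring enforces owner-only access on file-backed keys: with mode 0660 the add is refused up front (keyring_file_check_path logs the 0100660 mode and the RPC returns -1, Operation not permitted), and only after restoring 0600 does the same call succeed. The pattern being exercised:

chmod 0660 "$key0path"
NOT bperf_cmd keyring_file_add_key key0 "$key0path"   # "Invalid permissions for key file"
chmod 0600 "$key0path"
bperf_cmd keyring_file_add_key key0 "$key0path"       # accepted again
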
keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:06.357 03:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.u7BqU0YzlA 01:02:06.615 03:58:47 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.u7BqU0YzlA 01:02:06.615 03:58:47 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 01:02:06.615 03:58:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:06.615 03:58:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:06.615 03:58:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:06.615 03:58:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:06.615 03:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:06.615 03:58:47 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 01:02:06.615 03:58:47 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:06.615 03:58:47 keyring_file -- common/autotest_common.sh@649 -- # local es=0 01:02:06.615 03:58:47 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:06.615 03:58:47 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 01:02:06.615 03:58:47 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:06.615 03:58:47 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 01:02:06.615 03:58:47 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:06.615 03:58:47 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:06.615 03:58:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:06.875 [2024-06-11 03:58:48.096076] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.u7BqU0YzlA': No such file or directory 01:02:06.875 [2024-06-11 03:58:48.096096] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 01:02:06.875 [2024-06-11 03:58:48.096115] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 01:02:06.875 [2024-06-11 03:58:48.096121] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 01:02:06.875 [2024-06-11 03:58:48.096127] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 01:02:06.875 request: 01:02:06.875 { 01:02:06.875 "name": "nvme0", 01:02:06.875 "trtype": "tcp", 01:02:06.875 "traddr": "127.0.0.1", 01:02:06.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:02:06.875 "adrfam": "ipv4", 01:02:06.875 "trsvcid": "4420", 01:02:06.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:02:06.875 "psk": "key0", 01:02:06.875 "method": "bdev_nvme_attach_controller", 
01:02:06.875 "req_id": 1 01:02:06.875 } 01:02:06.875 Got JSON-RPC error response 01:02:06.875 response: 01:02:06.875 { 01:02:06.875 "code": -19, 01:02:06.875 "message": "No such device" 01:02:06.875 } 01:02:06.875 03:58:48 keyring_file -- common/autotest_common.sh@652 -- # es=1 01:02:06.875 03:58:48 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 01:02:06.875 03:58:48 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 01:02:06.875 03:58:48 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 01:02:06.875 03:58:48 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:02:06.875 03:58:48 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@17 -- # name=key0 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@17 -- # digest=0 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@18 -- # mktemp 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jyLFFqBZWT 01:02:06.875 03:58:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:02:06.875 03:58:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:02:06.875 03:58:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 01:02:06.875 03:58:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:02:06.875 03:58:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:02:06.875 03:58:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 01:02:06.875 03:58:48 keyring_file -- nvmf/common.sh@705 -- # python - 01:02:07.134 03:58:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jyLFFqBZWT 01:02:07.134 03:58:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jyLFFqBZWT 01:02:07.134 03:58:48 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.jyLFFqBZWT 01:02:07.134 03:58:48 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jyLFFqBZWT 01:02:07.134 03:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jyLFFqBZWT 01:02:07.134 03:58:48 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:07.134 03:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:07.394 nvme0n1 01:02:07.394 03:58:48 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 01:02:07.394 03:58:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:07.394 03:58:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:07.394 03:58:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:07.394 
03:58:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:07.394 03:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:07.653 03:58:48 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 01:02:07.653 03:58:48 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 01:02:07.653 03:58:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 01:02:07.911 03:58:49 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 01:02:07.911 03:58:49 keyring_file -- keyring/file.sh@101 -- # get_key key0 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:07.911 03:58:49 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 01:02:07.911 03:58:49 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:07.911 03:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:08.169 03:58:49 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 01:02:08.169 03:58:49 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:02:08.169 03:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:02:08.427 03:58:49 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 01:02:08.427 03:58:49 keyring_file -- keyring/file.sh@104 -- # jq length 01:02:08.427 03:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:08.427 03:58:49 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 01:02:08.427 03:58:49 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jyLFFqBZWT 01:02:08.427 03:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jyLFFqBZWT 01:02:08.684 03:58:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.g7AxLHmwKD 01:02:08.684 03:58:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.g7AxLHmwKD 01:02:08.684 03:58:50 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:08.684 03:58:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 01:02:08.942 nvme0n1 01:02:08.942 03:58:50 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 01:02:08.942 03:58:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 01:02:09.228 03:58:50 keyring_file -- keyring/file.sh@112 -- # config='{ 01:02:09.228 "subsystems": [ 01:02:09.228 { 01:02:09.228 "subsystem": "keyring", 01:02:09.228 "config": [ 01:02:09.228 { 01:02:09.228 "method": "keyring_file_add_key", 01:02:09.228 "params": { 01:02:09.228 "name": "key0", 01:02:09.228 "path": "/tmp/tmp.jyLFFqBZWT" 01:02:09.228 } 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "method": "keyring_file_add_key", 01:02:09.228 "params": { 01:02:09.228 "name": "key1", 01:02:09.228 "path": "/tmp/tmp.g7AxLHmwKD" 01:02:09.228 } 01:02:09.228 } 01:02:09.228 ] 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "subsystem": "iobuf", 01:02:09.228 "config": [ 01:02:09.228 { 01:02:09.228 "method": "iobuf_set_options", 01:02:09.228 "params": { 01:02:09.228 "small_pool_count": 8192, 01:02:09.228 "large_pool_count": 1024, 01:02:09.228 "small_bufsize": 8192, 01:02:09.228 "large_bufsize": 135168 01:02:09.228 } 01:02:09.228 } 01:02:09.228 ] 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "subsystem": "sock", 01:02:09.228 "config": [ 01:02:09.228 { 01:02:09.228 "method": "sock_set_default_impl", 01:02:09.228 "params": { 01:02:09.228 "impl_name": "posix" 01:02:09.228 } 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "method": "sock_impl_set_options", 01:02:09.228 "params": { 01:02:09.228 "impl_name": "ssl", 01:02:09.228 "recv_buf_size": 4096, 01:02:09.228 "send_buf_size": 4096, 01:02:09.228 "enable_recv_pipe": true, 01:02:09.228 "enable_quickack": false, 01:02:09.228 "enable_placement_id": 0, 01:02:09.228 "enable_zerocopy_send_server": true, 01:02:09.228 "enable_zerocopy_send_client": false, 01:02:09.228 "zerocopy_threshold": 0, 01:02:09.228 "tls_version": 0, 01:02:09.228 "enable_ktls": false 01:02:09.228 } 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "method": "sock_impl_set_options", 01:02:09.228 "params": { 01:02:09.228 "impl_name": "posix", 01:02:09.228 "recv_buf_size": 2097152, 01:02:09.228 "send_buf_size": 2097152, 01:02:09.228 "enable_recv_pipe": true, 01:02:09.228 "enable_quickack": false, 01:02:09.228 "enable_placement_id": 0, 01:02:09.228 "enable_zerocopy_send_server": true, 01:02:09.228 "enable_zerocopy_send_client": false, 01:02:09.228 "zerocopy_threshold": 0, 01:02:09.228 "tls_version": 0, 01:02:09.228 "enable_ktls": false 01:02:09.228 } 01:02:09.228 } 01:02:09.228 ] 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "subsystem": "vmd", 01:02:09.228 "config": [] 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "subsystem": "accel", 01:02:09.228 "config": [ 01:02:09.228 { 01:02:09.228 "method": "accel_set_options", 01:02:09.228 "params": { 01:02:09.228 "small_cache_size": 128, 01:02:09.228 "large_cache_size": 16, 01:02:09.228 "task_count": 2048, 01:02:09.228 "sequence_count": 2048, 01:02:09.228 "buf_count": 2048 01:02:09.228 } 01:02:09.228 } 01:02:09.228 ] 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "subsystem": "bdev", 01:02:09.228 "config": [ 01:02:09.228 { 01:02:09.228 "method": "bdev_set_options", 01:02:09.228 "params": { 01:02:09.228 "bdev_io_pool_size": 65535, 01:02:09.228 "bdev_io_cache_size": 256, 01:02:09.228 "bdev_auto_examine": true, 01:02:09.228 "iobuf_small_cache_size": 128, 
01:02:09.228 "iobuf_large_cache_size": 16 01:02:09.228 } 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "method": "bdev_raid_set_options", 01:02:09.228 "params": { 01:02:09.228 "process_window_size_kb": 1024 01:02:09.228 } 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "method": "bdev_iscsi_set_options", 01:02:09.228 "params": { 01:02:09.228 "timeout_sec": 30 01:02:09.228 } 01:02:09.228 }, 01:02:09.228 { 01:02:09.228 "method": "bdev_nvme_set_options", 01:02:09.228 "params": { 01:02:09.228 "action_on_timeout": "none", 01:02:09.228 "timeout_us": 0, 01:02:09.228 "timeout_admin_us": 0, 01:02:09.228 "keep_alive_timeout_ms": 10000, 01:02:09.228 "arbitration_burst": 0, 01:02:09.228 "low_priority_weight": 0, 01:02:09.228 "medium_priority_weight": 0, 01:02:09.228 "high_priority_weight": 0, 01:02:09.228 "nvme_adminq_poll_period_us": 10000, 01:02:09.228 "nvme_ioq_poll_period_us": 0, 01:02:09.228 "io_queue_requests": 512, 01:02:09.228 "delay_cmd_submit": true, 01:02:09.228 "transport_retry_count": 4, 01:02:09.228 "bdev_retry_count": 3, 01:02:09.228 "transport_ack_timeout": 0, 01:02:09.228 "ctrlr_loss_timeout_sec": 0, 01:02:09.228 "reconnect_delay_sec": 0, 01:02:09.228 "fast_io_fail_timeout_sec": 0, 01:02:09.228 "disable_auto_failback": false, 01:02:09.228 "generate_uuids": false, 01:02:09.228 "transport_tos": 0, 01:02:09.228 "nvme_error_stat": false, 01:02:09.228 "rdma_srq_size": 0, 01:02:09.228 "io_path_stat": false, 01:02:09.228 "allow_accel_sequence": false, 01:02:09.228 "rdma_max_cq_size": 0, 01:02:09.228 "rdma_cm_event_timeout_ms": 0, 01:02:09.228 "dhchap_digests": [ 01:02:09.228 "sha256", 01:02:09.228 "sha384", 01:02:09.228 "sha512" 01:02:09.228 ], 01:02:09.228 "dhchap_dhgroups": [ 01:02:09.228 "null", 01:02:09.228 "ffdhe2048", 01:02:09.228 "ffdhe3072", 01:02:09.228 "ffdhe4096", 01:02:09.229 "ffdhe6144", 01:02:09.229 "ffdhe8192" 01:02:09.229 ] 01:02:09.229 } 01:02:09.229 }, 01:02:09.229 { 01:02:09.229 "method": "bdev_nvme_attach_controller", 01:02:09.229 "params": { 01:02:09.229 "name": "nvme0", 01:02:09.229 "trtype": "TCP", 01:02:09.229 "adrfam": "IPv4", 01:02:09.229 "traddr": "127.0.0.1", 01:02:09.229 "trsvcid": "4420", 01:02:09.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:02:09.229 "prchk_reftag": false, 01:02:09.229 "prchk_guard": false, 01:02:09.229 "ctrlr_loss_timeout_sec": 0, 01:02:09.229 "reconnect_delay_sec": 0, 01:02:09.229 "fast_io_fail_timeout_sec": 0, 01:02:09.229 "psk": "key0", 01:02:09.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:02:09.229 "hdgst": false, 01:02:09.229 "ddgst": false 01:02:09.229 } 01:02:09.229 }, 01:02:09.229 { 01:02:09.229 "method": "bdev_nvme_set_hotplug", 01:02:09.229 "params": { 01:02:09.229 "period_us": 100000, 01:02:09.229 "enable": false 01:02:09.229 } 01:02:09.229 }, 01:02:09.229 { 01:02:09.229 "method": "bdev_wait_for_examine" 01:02:09.229 } 01:02:09.229 ] 01:02:09.229 }, 01:02:09.229 { 01:02:09.229 "subsystem": "nbd", 01:02:09.229 "config": [] 01:02:09.229 } 01:02:09.229 ] 01:02:09.229 }' 01:02:09.229 03:58:50 keyring_file -- keyring/file.sh@114 -- # killprocess 2454599 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2454599 ']' 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2454599 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@954 -- # uname 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2454599 01:02:09.229 03:58:50 keyring_file 
-- common/autotest_common.sh@955 -- # process_name=reactor_1 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2454599' 01:02:09.229 killing process with pid 2454599 01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@968 -- # kill 2454599
01:02:09.229 Received shutdown signal, test time was about 1.000000 seconds
01:02:09.229
01:02:09.229 Latency(us)
01:02:09.229 Device Information : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average    min    max
01:02:09.229 ===================================================================================================================
01:02:09.229 Total              :               0.00    0.00     0.00      0.00    0.00       0.00   0.00
01:02:09.229 03:58:50 keyring_file -- common/autotest_common.sh@973 -- # wait 2454599 01:02:09.488 03:58:50 keyring_file -- keyring/file.sh@117 -- # bperfpid=2456115 01:02:09.488 03:58:50 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2456115 /var/tmp/bperf.sock 01:02:09.488 03:58:50 keyring_file -- common/autotest_common.sh@830 -- # '[' -z 2456115 ']' 01:02:09.488 03:58:50 keyring_file -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 01:02:09.488 03:58:50 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 01:02:09.488 03:58:50 keyring_file -- common/autotest_common.sh@835 -- # local max_retries=100 01:02:09.488 03:58:50 keyring_file -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:02:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
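
The second bdevperf is the persistence check: the JSON captured by save_config a moment earlier is fed back through -c /dev/fd/63 (bash process substitution), so the new process has to recreate both file keys and the nvme0 controller purely from configuration. The relaunch pattern, sketched:

config=$(bperf_cmd save_config)    # the JSON dump shown above
killprocess "$bperfpid"
"$rootdir/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!
waitforlisten "$bperfpid" /var/tmp/bperf.sock
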
01:02:09.488 03:58:50 keyring_file -- keyring/file.sh@115 -- # echo '{ 01:02:09.488 "subsystems": [ 01:02:09.488 { 01:02:09.488 "subsystem": "keyring", 01:02:09.488 "config": [ 01:02:09.488 { 01:02:09.488 "method": "keyring_file_add_key", 01:02:09.488 "params": { 01:02:09.488 "name": "key0", 01:02:09.488 "path": "/tmp/tmp.jyLFFqBZWT" 01:02:09.488 } 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "method": "keyring_file_add_key", 01:02:09.488 "params": { 01:02:09.488 "name": "key1", 01:02:09.488 "path": "/tmp/tmp.g7AxLHmwKD" 01:02:09.488 } 01:02:09.488 } 01:02:09.488 ] 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "subsystem": "iobuf", 01:02:09.488 "config": [ 01:02:09.488 { 01:02:09.488 "method": "iobuf_set_options", 01:02:09.488 "params": { 01:02:09.488 "small_pool_count": 8192, 01:02:09.488 "large_pool_count": 1024, 01:02:09.488 "small_bufsize": 8192, 01:02:09.488 "large_bufsize": 135168 01:02:09.488 } 01:02:09.488 } 01:02:09.488 ] 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "subsystem": "sock", 01:02:09.488 "config": [ 01:02:09.488 { 01:02:09.488 "method": "sock_set_default_impl", 01:02:09.488 "params": { 01:02:09.488 "impl_name": "posix" 01:02:09.488 } 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "method": "sock_impl_set_options", 01:02:09.488 "params": { 01:02:09.488 "impl_name": "ssl", 01:02:09.488 "recv_buf_size": 4096, 01:02:09.488 "send_buf_size": 4096, 01:02:09.488 "enable_recv_pipe": true, 01:02:09.488 "enable_quickack": false, 01:02:09.488 "enable_placement_id": 0, 01:02:09.488 "enable_zerocopy_send_server": true, 01:02:09.488 "enable_zerocopy_send_client": false, 01:02:09.488 "zerocopy_threshold": 0, 01:02:09.488 "tls_version": 0, 01:02:09.488 "enable_ktls": false 01:02:09.488 } 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "method": "sock_impl_set_options", 01:02:09.488 "params": { 01:02:09.488 "impl_name": "posix", 01:02:09.488 "recv_buf_size": 2097152, 01:02:09.488 "send_buf_size": 2097152, 01:02:09.488 "enable_recv_pipe": true, 01:02:09.488 "enable_quickack": false, 01:02:09.488 "enable_placement_id": 0, 01:02:09.488 "enable_zerocopy_send_server": true, 01:02:09.488 "enable_zerocopy_send_client": false, 01:02:09.488 "zerocopy_threshold": 0, 01:02:09.488 "tls_version": 0, 01:02:09.488 "enable_ktls": false 01:02:09.488 } 01:02:09.488 } 01:02:09.488 ] 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "subsystem": "vmd", 01:02:09.488 "config": [] 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "subsystem": "accel", 01:02:09.488 "config": [ 01:02:09.488 { 01:02:09.488 "method": "accel_set_options", 01:02:09.488 "params": { 01:02:09.488 "small_cache_size": 128, 01:02:09.488 "large_cache_size": 16, 01:02:09.488 "task_count": 2048, 01:02:09.488 "sequence_count": 2048, 01:02:09.488 "buf_count": 2048 01:02:09.488 } 01:02:09.488 } 01:02:09.488 ] 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "subsystem": "bdev", 01:02:09.488 "config": [ 01:02:09.488 { 01:02:09.488 "method": "bdev_set_options", 01:02:09.488 "params": { 01:02:09.488 "bdev_io_pool_size": 65535, 01:02:09.488 "bdev_io_cache_size": 256, 01:02:09.488 "bdev_auto_examine": true, 01:02:09.488 "iobuf_small_cache_size": 128, 01:02:09.488 "iobuf_large_cache_size": 16 01:02:09.488 } 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "method": "bdev_raid_set_options", 01:02:09.488 "params": { 01:02:09.488 "process_window_size_kb": 1024 01:02:09.488 } 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "method": "bdev_iscsi_set_options", 01:02:09.488 "params": { 01:02:09.488 "timeout_sec": 30 01:02:09.488 } 01:02:09.488 }, 01:02:09.488 { 01:02:09.488 "method": 
"bdev_nvme_set_options", 01:02:09.488 "params": { 01:02:09.488 "action_on_timeout": "none", 01:02:09.488 "timeout_us": 0, 01:02:09.488 "timeout_admin_us": 0, 01:02:09.488 "keep_alive_timeout_ms": 10000, 01:02:09.488 "arbitration_burst": 0, 01:02:09.488 "low_priority_weight": 0, 01:02:09.488 "medium_priority_weight": 0, 01:02:09.488 "high_priority_weight": 0, 01:02:09.488 "nvme_adminq_poll_period_us": 10000, 01:02:09.488 "nvme_ioq_poll_period_us": 0, 01:02:09.488 "io_queue_requests": 512, 01:02:09.488 "delay_cmd_submit": true, 01:02:09.488 "transport_retry_count": 4, 01:02:09.488 "bdev_retry_count": 3, 01:02:09.488 "transport_ack_timeout": 0, 01:02:09.488 "ctrlr_loss_timeout_sec": 0, 01:02:09.488 "reconnect_delay_sec": 0, 01:02:09.488 "fast_io_fail_timeout_sec": 0, 01:02:09.488 "disable_auto_failback": false, 01:02:09.488 "generate_uuids": false, 01:02:09.488 "transport_tos": 0, 01:02:09.488 "nvme_error_stat": false, 01:02:09.488 "rdma_srq_size": 0, 01:02:09.488 "io_path_stat": false, 01:02:09.488 "allow_accel_sequence": false, 01:02:09.488 "rdma_max_cq_size": 0, 01:02:09.488 "rdma_cm_event_timeout_ms": 0, 01:02:09.488 "dhchap_digests": [ 01:02:09.488 "sha256", 01:02:09.488 "sha384", 01:02:09.488 "sha512" 01:02:09.488 ], 01:02:09.488 "dhchap_dhgroups": [ 01:02:09.488 "null", 01:02:09.488 "ffdhe2048", 01:02:09.488 "ffdhe3072", 01:02:09.488 "ffdhe4096", 01:02:09.488 "ffdhe6144", 01:02:09.488 "ffdhe8192" 01:02:09.488 ] 01:02:09.488 } 01:02:09.488 }, 01:02:09.489 { 01:02:09.489 "method": "bdev_nvme_attach_controller", 01:02:09.489 "params": { 01:02:09.489 "name": "nvme0", 01:02:09.489 "trtype": "TCP", 01:02:09.489 "adrfam": "IPv4", 01:02:09.489 "traddr": "127.0.0.1", 01:02:09.489 "trsvcid": "4420", 01:02:09.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:02:09.489 "prchk_reftag": false, 01:02:09.489 "prchk_guard": false, 01:02:09.489 "ctrlr_loss_timeout_sec": 0, 01:02:09.489 "reconnect_delay_sec": 0, 01:02:09.489 "fast_io_fail_timeout_sec": 0, 01:02:09.489 "psk": "key0", 01:02:09.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:02:09.489 "hdgst": false, 01:02:09.489 "ddgst": false 01:02:09.489 } 01:02:09.489 }, 01:02:09.489 { 01:02:09.489 "method": "bdev_nvme_set_hotplug", 01:02:09.489 "params": { 01:02:09.489 "period_us": 100000, 01:02:09.489 "enable": false 01:02:09.489 } 01:02:09.489 }, 01:02:09.489 { 01:02:09.489 "method": "bdev_wait_for_examine" 01:02:09.489 } 01:02:09.489 ] 01:02:09.489 }, 01:02:09.489 { 01:02:09.489 "subsystem": "nbd", 01:02:09.489 "config": [] 01:02:09.489 } 01:02:09.489 ] 01:02:09.489 }' 01:02:09.489 03:58:50 keyring_file -- common/autotest_common.sh@839 -- # xtrace_disable 01:02:09.489 03:58:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:02:09.489 [2024-06-11 03:58:50.812752] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
01:02:09.489 [2024-06-11 03:58:50.812797] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456115 ] 01:02:09.489 EAL: No free 2048 kB hugepages reported on node 1 01:02:09.489 [2024-06-11 03:58:50.871495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:09.747 [2024-06-11 03:58:50.912137] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 01:02:09.747 [2024-06-11 03:58:51.064948] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:10.314 03:58:51 keyring_file -- common/autotest_common.sh@859 -- # (( i == 0 )) 01:02:10.314 03:58:51 keyring_file -- common/autotest_common.sh@863 -- # return 0 01:02:10.314 03:58:51 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 01:02:10.314 03:58:51 keyring_file -- keyring/file.sh@120 -- # jq length 01:02:10.314 03:58:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:10.572 03:58:51 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 01:02:10.572 03:58:51 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:10.572 03:58:51 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 01:02:10.572 03:58:51 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 01:02:10.572 03:58:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:10.830 03:58:52 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 01:02:10.830 03:58:52 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 01:02:10.830 03:58:52 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 01:02:10.830 03:58:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 01:02:11.089 03:58:52 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 01:02:11.089 03:58:52 keyring_file -- keyring/file.sh@1 -- # cleanup 01:02:11.089 03:58:52 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.jyLFFqBZWT /tmp/tmp.g7AxLHmwKD 01:02:11.089 03:58:52 keyring_file -- keyring/file.sh@20 -- # killprocess 2456115 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2456115 ']' 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2456115 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@954 -- # 
uname 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2456115 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_1 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2456115' 01:02:11.089 killing process with pid 2456115 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@968 -- # kill 2456115 01:02:11.089 Received shutdown signal, test time was about 1.000000 seconds 01:02:11.089 01:02:11.089 Latency(us) 01:02:11.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:11.089 =================================================================================================================== 01:02:11.089 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 01:02:11.089 03:58:52 keyring_file -- common/autotest_common.sh@973 -- # wait 2456115 01:02:11.348 03:58:52 keyring_file -- keyring/file.sh@21 -- # killprocess 2454594 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@949 -- # '[' -z 2454594 ']' 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@953 -- # kill -0 2454594 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@954 -- # uname 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2454594 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@955 -- # process_name=reactor_0 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2454594' 01:02:11.348 killing process with pid 2454594 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@968 -- # kill 2454594 01:02:11.348 [2024-06-11 03:58:52.564896] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 01:02:11.348 03:58:52 keyring_file -- common/autotest_common.sh@973 -- # wait 2454594 01:02:11.606 01:02:11.606 real 0m11.155s 01:02:11.606 user 0m26.903s 01:02:11.606 sys 0m2.703s 01:02:11.606 03:58:52 keyring_file -- common/autotest_common.sh@1125 -- # xtrace_disable 01:02:11.606 03:58:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 01:02:11.606 ************************************ 01:02:11.606 END TEST keyring_file 01:02:11.606 ************************************ 01:02:11.606 03:58:52 -- spdk/autotest.sh@296 -- # [[ y == y ]] 01:02:11.606 03:58:52 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 01:02:11.606 03:58:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 01:02:11.606 03:58:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 01:02:11.606 03:58:52 -- common/autotest_common.sh@10 -- # set +x 01:02:11.606 ************************************ 01:02:11.606 START TEST keyring_linux 01:02:11.606 ************************************ 01:02:11.606 03:58:52 keyring_linux -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 01:02:11.606 * Looking for test storage... 
01:02:11.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 01:02:11.606 03:58:53 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 01:02:11.606 03:58:53 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@7 -- # uname -s 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 01:02:11.606 03:58:53 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:02:11.865 03:58:53 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 01:02:11.865 03:58:53 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:11.865 03:58:53 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:11.865 03:58:53 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:11.865 03:58:53 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:11.865 03:58:53 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:11.865 03:58:53 keyring_linux -- paths/export.sh@5 -- # export PATH 01:02:11.865 03:58:53 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@47 -- # : 0 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 01:02:11.865 03:58:53 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 01:02:11.865 03:58:53 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 01:02:11.865 03:58:53 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 01:02:11.865 03:58:53 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 01:02:11.865 03:58:53 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 01:02:11.865 03:58:53 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 01:02:11.865 03:58:53 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 01:02:11.865 03:58:53 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@17 -- # name=key0 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@705 -- # python - 01:02:11.866 03:58:53 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 01:02:11.866 /tmp/:spdk-test:key0 01:02:11.866 03:58:53 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@17 -- # name=key1 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@17 -- # digest=0 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@704 -- # digest=0 01:02:11.866 03:58:53 keyring_linux -- nvmf/common.sh@705 -- # python - 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 01:02:11.866 03:58:53 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 01:02:11.866 /tmp/:spdk-test:key1 01:02:11.866 03:58:53 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2456661 01:02:11.866 03:58:53 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2456661 01:02:11.866 03:58:53 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2456661 ']' 01:02:11.866 03:58:53 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 01:02:11.866 03:58:53 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 01:02:11.866 03:58:53 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 01:02:11.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:02:11.866 03:58:53 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 01:02:11.866 03:58:53 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 01:02:11.866 03:58:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:02:11.866 [2024-06-11 03:58:53.153740] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
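prep_key derives the on-disk key material via format_interchange_psk, whose inline python body is not echoed by xtrace. A sketch of what it plausibly computes, assuming the NVMe/TCP TLS PSK interchange format (fixed NVMeTLSkey-1 prefix, a two-digit hash indicator where 00 means no hash, then base64 of the configured key text with a little-endian CRC32 appended) and python3 in place of the harness's plain python; the output should match the NVMeTLSkey-1:00:...: payloads handed to keyctl just below:

format_interchange_psk() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # configured PSK, kept as ASCII text
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity check appended by the format
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0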
01:02:11.866 [2024-06-11 03:58:53.153789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456661 ] 01:02:11.866 EAL: No free 2048 kB hugepages reported on node 1 01:02:11.866 [2024-06-11 03:58:53.210273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:11.866 [2024-06-11 03:58:53.250669] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@863 -- # return 0 01:02:12.125 03:58:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@560 -- # xtrace_disable 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:02:12.125 [2024-06-11 03:58:53.434430] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 01:02:12.125 null0 01:02:12.125 [2024-06-11 03:58:53.466485] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 01:02:12.125 [2024-06-11 03:58:53.466827] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 01:02:12.125 03:58:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 01:02:12.125 565208922 01:02:12.125 03:58:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 01:02:12.125 4795168 01:02:12.125 03:58:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2456670 01:02:12.125 03:58:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2456670 /var/tmp/bperf.sock 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@830 -- # '[' -z 2456670 ']' 01:02:12.125 03:58:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bperf.sock 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@835 -- # local max_retries=100 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 01:02:12.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@839 -- # xtrace_disable 01:02:12.125 03:58:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:02:12.383 [2024-06-11 03:58:53.531756] Starting SPDK v24.09-pre git sha1 5f5c52753 / DPDK 22.11.4 initialization... 
01:02:12.383 [2024-06-11 03:58:53.531798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456670 ] 01:02:12.383 EAL: No free 2048 kB hugepages reported on node 1 01:02:12.383 [2024-06-11 03:58:53.590583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 01:02:12.383 [2024-06-11 03:58:53.631121] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 01:02:12.383 03:58:53 keyring_linux -- common/autotest_common.sh@859 -- # (( i == 0 )) 01:02:12.383 03:58:53 keyring_linux -- common/autotest_common.sh@863 -- # return 0 01:02:12.383 03:58:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 01:02:12.383 03:58:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 01:02:12.641 03:58:53 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 01:02:12.641 03:58:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 01:02:12.641 03:58:54 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:02:12.641 03:58:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 01:02:12.899 [2024-06-11 03:58:54.187218] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 01:02:12.899 nvme0n1 01:02:12.899 03:58:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 01:02:12.899 03:58:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 01:02:12.899 03:58:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:02:12.899 03:58:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:02:12.899 03:58:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:12.899 03:58:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:02:13.157 03:58:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 01:02:13.157 03:58:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:02:13.157 03:58:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 01:02:13.157 03:58:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 01:02:13.157 03:58:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 01:02:13.157 03:58:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:13.157 03:58:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 01:02:13.415 03:58:54 keyring_linux -- keyring/linux.sh@25 -- # sn=565208922 01:02:13.415 03:58:54 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 01:02:13.415 03:58:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
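keyring_linux stores the interchange PSK in the kernel session keyring and lets SPDK resolve it by name at attach time (the --psk :spdk-test:key0 argument above). The kernel-side round trip, consolidated from the keyctl calls scattered through this log and assuming keyutils is installed:

PSK='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

sn=$(keyctl add user :spdk-test:key0 "$PSK" @s)  # add to the session keyring; prints the serial
keyctl search @s user :spdk-test:key0            # resolve name -> serial, should equal $sn
keyctl print "$sn"                               # read the payload back for comparison
keyctl unlink "$sn"                              # detach it from the session keyring

The checks at linux.sh@26 and linux.sh@27 are exactly this comparison: the .sn reported by keyring_get_keys must match what keyctl search returns, and keyctl print must reproduce the interchange string.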
01:02:13.415 03:58:54 keyring_linux -- keyring/linux.sh@26 -- # [[ 565208922 == \5\6\5\2\0\8\9\2\2 ]] 01:02:13.415 03:58:54 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 565208922 01:02:13.415 03:58:54 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 01:02:13.415 03:58:54 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 01:02:13.415 Running I/O for 1 seconds... 01:02:14.350 01:02:14.350 Latency(us) 01:02:14.350 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:14.350 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 01:02:14.350 nvme0n1 : 1.01 15981.17 62.43 0.00 0.00 7976.67 6772.05 15915.89 01:02:14.350 =================================================================================================================== 01:02:14.350 Total : 15981.17 62.43 0.00 0.00 7976.67 6772.05 15915.89 01:02:14.350 0 01:02:14.350 03:58:55 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 01:02:14.350 03:58:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 01:02:14.608 03:58:55 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 01:02:14.608 03:58:55 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 01:02:14.608 03:58:55 keyring_linux -- keyring/linux.sh@20 -- # local sn 01:02:14.608 03:58:55 keyring_linux -- keyring/linux.sh@22 -- # jq length 01:02:14.608 03:58:55 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 01:02:14.608 03:58:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 01:02:14.866 03:58:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 01:02:14.866 03:58:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 01:02:14.866 03:58:56 keyring_linux -- keyring/linux.sh@23 -- # return 01:02:14.866 03:58:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:02:14.866 03:58:56 keyring_linux -- common/autotest_common.sh@649 -- # local es=0 01:02:14.866 03:58:56 keyring_linux -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:02:14.866 03:58:56 keyring_linux -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 01:02:14.866 03:58:56 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:14.866 03:58:56 keyring_linux -- common/autotest_common.sh@641 -- # type -t bperf_cmd 01:02:14.866 03:58:56 keyring_linux -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 01:02:14.866 03:58:56 keyring_linux -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:02:14.866 03:58:56 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 01:02:14.866 [2024-06-11 03:58:56.265510] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 01:02:14.866 [2024-06-11 03:58:56.265916] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111e170 (107): Transport endpoint is not connected 01:02:14.866 [2024-06-11 03:58:56.266912] nvme_tcp.c:2180:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111e170 (9): Bad file descriptor 01:02:14.866 [2024-06-11 03:58:56.267912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 01:02:14.866 [2024-06-11 03:58:56.267923] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 01:02:14.866 [2024-06-11 03:58:56.267929] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 01:02:15.125 request: 01:02:15.125 { 01:02:15.125 "name": "nvme0", 01:02:15.125 "trtype": "tcp", 01:02:15.125 "traddr": "127.0.0.1", 01:02:15.125 "hostnqn": "nqn.2016-06.io.spdk:host0", 01:02:15.125 "adrfam": "ipv4", 01:02:15.125 "trsvcid": "4420", 01:02:15.125 "subnqn": "nqn.2016-06.io.spdk:cnode0", 01:02:15.125 "psk": ":spdk-test:key1", 01:02:15.125 "method": "bdev_nvme_attach_controller", 01:02:15.125 "req_id": 1 01:02:15.125 } 01:02:15.125 Got JSON-RPC error response 01:02:15.125 response: 01:02:15.125 { 01:02:15.125 "code": -5, 01:02:15.125 "message": "Input/output error" 01:02:15.125 } 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@652 -- # es=1 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@660 -- # (( es > 128 )) 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@671 -- # [[ -n '' ]] 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@676 -- # (( !es == 0 )) 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@33 -- # sn=565208922 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 565208922 01:02:15.125 1 links removed 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@33 -- # sn=4795168 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 4795168 01:02:15.125 1 links removed 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 2456670 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2456670 ']' 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2456670 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@954 -- # uname 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2456670 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_1 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2456670' 01:02:15.125 killing process with pid 2456670 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@968 -- # kill 2456670 01:02:15.125 Received shutdown signal, test time was about 1.000000 seconds 01:02:15.125 01:02:15.125 Latency(us) 01:02:15.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 01:02:15.125 =================================================================================================================== 01:02:15.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@973 -- # wait 2456670 01:02:15.125 03:58:56 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2456661 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@949 -- # '[' -z 2456661 ']' 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@953 -- # kill -0 2456661 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@954 -- # uname 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 01:02:15.125 03:58:56 keyring_linux -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 2456661 01:02:15.383 03:58:56 keyring_linux -- common/autotest_common.sh@955 -- # process_name=reactor_0 01:02:15.383 03:58:56 keyring_linux -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 01:02:15.383 03:58:56 keyring_linux -- common/autotest_common.sh@967 -- # echo 'killing process with pid 2456661' 01:02:15.383 killing process with pid 2456661 01:02:15.383 03:58:56 keyring_linux -- common/autotest_common.sh@968 -- # kill 2456661 01:02:15.384 03:58:56 keyring_linux -- common/autotest_common.sh@973 -- # wait 2456661 01:02:15.642 01:02:15.642 real 0m3.918s 01:02:15.642 user 0m6.690s 01:02:15.642 sys 0m1.503s 01:02:15.642 03:58:56 keyring_linux -- common/autotest_common.sh@1125 -- # xtrace_disable 01:02:15.642 03:58:56 keyring_linux -- common/autotest_common.sh@10 -- # set +x 01:02:15.643 ************************************ 01:02:15.643 END TEST keyring_linux 01:02:15.643 ************************************ 01:02:15.643 03:58:56 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
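The NOT wrapper at linux.sh@84 earlier is a negative test: after nvme0 is detached, attaching with :spdk-test:key1, a PSK the target listener was not set up with, is required to fail (the connection is refused during the TLS handshake, surfacing as the Input/output error response above), and the harness asserts a non-zero exit status (es=1). A minimal sketch of that assertion, assuming the same bperf RPC socket:

if scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
     -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
     --psk :spdk-test:key1; then
  echo "unexpected success: attach with the wrong PSK must fail" >&2
  exit 1
fi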
01:02:15.643 03:58:56 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 01:02:15.643 03:58:56 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 01:02:15.643 03:58:56 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 01:02:15.643 03:58:56 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 01:02:15.643 03:58:56 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 01:02:15.643 03:58:56 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 01:02:15.643 03:58:56 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 01:02:15.643 03:58:56 -- common/autotest_common.sh@723 -- # xtrace_disable 01:02:15.643 03:58:56 -- common/autotest_common.sh@10 -- # set +x 01:02:15.643 03:58:56 -- spdk/autotest.sh@383 -- # autotest_cleanup 01:02:15.643 03:58:56 -- common/autotest_common.sh@1391 -- # local autotest_es=0 01:02:15.643 03:58:56 -- common/autotest_common.sh@1392 -- # xtrace_disable 01:02:15.643 03:58:56 -- common/autotest_common.sh@10 -- # set +x 01:02:19.835 INFO: APP EXITING 01:02:19.835 INFO: killing all VMs 01:02:19.835 INFO: killing vhost app 01:02:19.835 INFO: EXIT DONE 01:02:22.365 0000:5f:00.0 (8086 0a54): Already using the nvme driver 01:02:22.365 0000:00:04.7 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:00:04.6 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:00:04.5 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:00:04.4 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:00:04.3 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:00:04.2 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:00:04.1 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:00:04.0 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.7 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.6 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.5 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.4 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.3 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.2 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.1 (8086 2021): Already using the ioatdma driver 01:02:22.365 0000:80:04.0 (8086 2021): Already using the ioatdma driver 01:02:25.647 Cleaning 01:02:25.648 Removing: /var/run/dpdk/spdk0/config 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 01:02:25.648 Removing: /var/run/dpdk/spdk0/fbarray_memzone 01:02:25.648 Removing: /var/run/dpdk/spdk0/hugepage_info 01:02:25.648 Removing: /var/run/dpdk/spdk1/config 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 01:02:25.648 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 01:02:25.648 Removing: /var/run/dpdk/spdk1/fbarray_memzone 01:02:25.648 Removing: /var/run/dpdk/spdk1/hugepage_info 01:02:25.648 Removing: /var/run/dpdk/spdk1/mp_socket 01:02:25.648 Removing: /var/run/dpdk/spdk2/config 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 01:02:25.648 Removing: /var/run/dpdk/spdk2/fbarray_memzone 01:02:25.648 Removing: /var/run/dpdk/spdk2/hugepage_info 01:02:25.648 Removing: /var/run/dpdk/spdk3/config 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 01:02:25.648 Removing: /var/run/dpdk/spdk3/fbarray_memzone 01:02:25.648 Removing: /var/run/dpdk/spdk3/hugepage_info 01:02:25.648 Removing: /var/run/dpdk/spdk4/config 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 01:02:25.648 Removing: /var/run/dpdk/spdk4/fbarray_memzone 01:02:25.648 Removing: /var/run/dpdk/spdk4/hugepage_info 01:02:25.648 Removing: /dev/shm/bdev_svc_trace.1 01:02:25.648 Removing: /dev/shm/nvmf_trace.0 01:02:25.648 Removing: /dev/shm/spdk_tgt_trace.pid1972745 01:02:25.648 Removing: /var/run/dpdk/spdk0 01:02:25.648 Removing: /var/run/dpdk/spdk1 01:02:25.648 Removing: /var/run/dpdk/spdk2 01:02:25.648 Removing: /var/run/dpdk/spdk3 01:02:25.648 Removing: /var/run/dpdk/spdk4 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1970393 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1971457 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1972745 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1973159 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1974100 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1974313 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1975311 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1975317 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1975549 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1977170 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1978431 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1978712 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1978998 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1979292 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1979378 01:02:25.648 Removing: 
/var/run/dpdk/spdk_pid1979615 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1979861 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1980141 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1980896 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1983856 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1984120 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1984190 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1984381 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1984657 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1984718 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1985158 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1985167 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1985498 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1985646 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1985808 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1985910 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1986275 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1986503 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1986790 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1987048 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1987075 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1987353 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1987602 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1987838 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1988078 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1988299 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1988521 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1988758 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1989003 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1989247 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1989479 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1989700 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1989924 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1990154 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1990383 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1990624 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1990880 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1991125 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1991375 01:02:25.648 Removing: /var/run/dpdk/spdk_pid1991636 01:02:25.906 Removing: /var/run/dpdk/spdk_pid1991885 01:02:25.906 Removing: /var/run/dpdk/spdk_pid1992132 01:02:25.906 Removing: /var/run/dpdk/spdk_pid1992414 01:02:25.906 Removing: /var/run/dpdk/spdk_pid1992504 01:02:25.906 Removing: /var/run/dpdk/spdk_pid1996607 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2080037 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2084571 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2094841 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2100306 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2104585 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2105274 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2117354 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2117421 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2118273 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2119184 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2120097 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2120564 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2120572 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2120802 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2121028 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2121031 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2121946 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2122647 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2123689 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2124264 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2124368 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2124883 01:02:25.906 Removing: 
/var/run/dpdk/spdk_pid2125992 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2126968 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2135362 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2135791 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2140140 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2146279 01:02:25.906 Removing: /var/run/dpdk/spdk_pid2148771 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2159396 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2169017 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2170802 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2171847 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2189756 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2193830 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2218619 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2223274 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2224982 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2226612 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2226801 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2226851 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2226938 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2227399 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2229215 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2229930 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2230243 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2232335 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2232824 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2233324 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2237879 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2243531 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2248540 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2285743 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2289608 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2296408 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2297701 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2299027 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2303589 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2307718 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2315822 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2315828 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2320679 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2320835 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2321069 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2321520 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2321525 01:02:25.907 Removing: /var/run/dpdk/spdk_pid2323370 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2324968 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2326567 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2328168 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2329765 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2331483 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2337724 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2338333 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2340775 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2341811 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2348511 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2351047 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2356709 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2362102 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2370719 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2378213 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2378256 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2397845 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2398316 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2398786 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2399316 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2400000 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2400474 01:02:26.164 Removing: 
/var/run/dpdk/spdk_pid2401044 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2401632 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2405948 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2406180 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2412520 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2412792 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2415012 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2422802 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2422814 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2428432 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2430412 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2432703 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2433914 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2435886 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2436946 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2446141 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2446602 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2447068 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2449651 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2450117 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2450583 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2454594 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2454599 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2456115 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2456661 01:02:26.164 Removing: /var/run/dpdk/spdk_pid2456670 01:02:26.164 Clean 01:02:26.164 03:59:07 -- common/autotest_common.sh@1450 -- # return 0 01:02:26.164 03:59:07 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 01:02:26.164 03:59:07 -- common/autotest_common.sh@729 -- # xtrace_disable 01:02:26.164 03:59:07 -- common/autotest_common.sh@10 -- # set +x 01:02:26.422 03:59:07 -- spdk/autotest.sh@386 -- # timing_exit autotest 01:02:26.422 03:59:07 -- common/autotest_common.sh@729 -- # xtrace_disable 01:02:26.422 03:59:07 -- common/autotest_common.sh@10 -- # set +x 01:02:26.422 03:59:07 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 01:02:26.422 03:59:07 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 01:02:26.422 03:59:07 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 01:02:26.422 03:59:07 -- spdk/autotest.sh@391 -- # hash lcov 01:02:26.422 03:59:07 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 01:02:26.422 03:59:07 -- spdk/autotest.sh@393 -- # hostname 01:02:26.422 03:59:07 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 01:02:26.422 geninfo: WARNING: invalid characters removed from testname! 
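The autotest epilogue around this point post-processes code coverage: autotest.sh@393 captures the post-test counters under a per-host test name, @394 merges them with the pre-test baseline, and @395 through @399 prune trees that are not SPDK's own sources. A condensed sketch of that flow, assuming lcov is installed, that a cov_base.info baseline was captured before the tests ran, and omitting the genhtml --rc switches from the log for brevity:

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
out=../output

$LCOV -c -d ./spdk -t "$(hostname)" -o "$out/cov_test.info"                     # capture counters written during the tests
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"  # merge baseline and test captures
$LCOV -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"              # strip bundled DPDK sources
$LCOV -r "$out/cov_total.info" '/usr/*' -o "$out/cov_total.info"                # strip system headers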
01:02:48.379 03:59:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:02:48.379 03:59:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:02:50.279 03:59:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:02:52.189 03:59:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:02:54.086 03:59:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:02:55.457 03:59:36 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 01:02:57.356 03:59:38 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 01:02:57.356 03:59:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 01:02:57.356 03:59:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 01:02:57.356 03:59:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 01:02:57.356 03:59:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 01:02:57.356 03:59:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:57.356 03:59:38 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:57.356 03:59:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:57.356 03:59:38 -- paths/export.sh@5 -- $ export PATH 01:02:57.356 03:59:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 01:02:57.356 03:59:38 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 01:02:57.356 03:59:38 -- common/autobuild_common.sh@437 -- $ date +%s 01:02:57.356 03:59:38 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718071178.XXXXXX 01:02:57.356 03:59:38 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718071178.QPVPis 01:02:57.356 03:59:38 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 01:02:57.356 03:59:38 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 01:02:57.356 03:59:38 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 01:02:57.356 03:59:38 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 01:02:57.356 03:59:38 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 01:02:57.356 03:59:38 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 01:02:57.356 03:59:38 -- common/autobuild_common.sh@453 -- $ get_config_params 01:02:57.356 03:59:38 -- common/autotest_common.sh@396 -- $ xtrace_disable 01:02:57.356 03:59:38 -- common/autotest_common.sh@10 -- $ set +x 01:02:57.357 03:59:38 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 01:02:57.357 03:59:38 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 01:02:57.357 03:59:38 -- pm/common@17 -- $ local monitor 01:02:57.357 03:59:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:02:57.357 03:59:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:02:57.357 03:59:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:02:57.357 
01:02:57.357 03:59:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:02:57.357 03:59:38 -- pm/common@25 -- $ sleep 1
01:02:57.357 03:59:38 -- pm/common@21 -- $ date +%s
01:02:57.357 03:59:38 -- pm/common@21 -- $ date +%s
01:02:57.357 03:59:38 -- pm/common@21 -- $ date +%s
01:02:57.357 03:59:38 -- pm/common@21 -- $ date +%s
01:02:57.357 03:59:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718071178
01:02:57.357 03:59:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718071178
01:02:57.357 03:59:38 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718071178
01:02:57.357 03:59:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718071178
01:02:57.357 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718071178_collect-vmstat.pm.log
01:02:57.357 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718071178_collect-cpu-load.pm.log
01:02:57.357 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718071178_collect-cpu-temp.pm.log
01:02:57.357 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718071178_collect-bmc-pm.bmc.pm.log
01:02:58.292 03:59:39 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
01:02:58.292 03:59:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
01:02:58.292 03:59:39 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:02:58.292 03:59:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
01:02:58.292 03:59:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
01:02:58.292 03:59:39 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
01:02:58.292 03:59:39 -- spdk/autopackage.sh@19 -- $ timing_finish
01:02:58.292 03:59:39 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
01:02:58.292 03:59:39 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
01:02:58.292 03:59:39 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
01:02:58.292 03:59:39 -- spdk/autopackage.sh@20 -- $ exit 0
01:02:58.292 03:59:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
01:02:58.292 03:59:39 -- pm/common@29 -- $ signal_monitor_resources TERM
01:02:58.292 03:59:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM
01:02:58.292 03:59:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:02:58.292 03:59:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
01:02:58.292 03:59:39 -- pm/common@44 -- $ pid=2467732
01:02:58.292 03:59:39 -- pm/common@50 -- $ kill -TERM 2467732
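[Editor's note] The pm/common trace above and the stop_monitor_resources teardown that follows show a standard background-monitor pattern: start each collector detached, keep one pid file per monitor, and install an EXIT trap that TERMs whatever is still recorded. A stripped-down sketch of the same idea (the $! bookkeeping and file layout are illustrative, not the actual pm/common internals, where the collectors manage their own pid files):

    power_dir=../output/power
    mkdir -p "$power_dir"
    monitors="collect-cpu-load collect-vmstat collect-cpu-temp"
    # Teardown: TERM every monitor whose pid file still exists.
    stop_monitors() {
        local m pid
        for m in $monitors; do
            [[ -e $power_dir/$m.pid ]] || continue
            pid=$(<"$power_dir/$m.pid")
            kill -TERM "$pid" 2>/dev/null || true
        done
    }
    trap stop_monitors EXIT
    # Startup: launch each collector in the background and remember it.
    for m in $monitors; do
        "./scripts/perf/pm/$m" -d "$power_dir" -l -p "monitor.$(date +%s)" &
        echo $! > "$power_dir/$m.pid"
    done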
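[Editor's note] timing_finish above hands the per-step build timings to flamegraph.pl from Brendan Gregg's FlameGraph toolkit, which reads "name value" lines and writes an SVG to stdout; the redirection is presumably handled inside the wrapper, since it does not appear in the log. Run standalone, the equivalent would be roughly (the timing.svg redirection is our addition):

    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        ../output/timing.txt > ../output/timing.svg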
"${MONITOR_RESOURCES[@]}" 01:02:58.292 03:59:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 01:02:58.292 03:59:39 -- pm/common@44 -- $ pid=2467733 01:02:58.292 03:59:39 -- pm/common@50 -- $ kill -TERM 2467733 01:02:58.292 03:59:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:02:58.292 03:59:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 01:02:58.292 03:59:39 -- pm/common@44 -- $ pid=2467735 01:02:58.292 03:59:39 -- pm/common@50 -- $ kill -TERM 2467735 01:02:58.292 03:59:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:02:58.292 03:59:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 01:02:58.292 03:59:39 -- pm/common@44 -- $ pid=2467758 01:02:58.292 03:59:39 -- pm/common@50 -- $ sudo -E kill -TERM 2467758 01:02:58.552 + [[ -n 1850042 ]] 01:02:58.552 + sudo kill 1850042 01:02:58.579 [Pipeline] } 01:02:58.595 [Pipeline] // stage 01:02:58.601 [Pipeline] } 01:02:58.617 [Pipeline] // timeout 01:02:58.623 [Pipeline] } 01:02:58.640 [Pipeline] // catchError 01:02:58.646 [Pipeline] } 01:02:58.664 [Pipeline] // wrap 01:02:58.670 [Pipeline] } 01:02:58.684 [Pipeline] // catchError 01:02:58.693 [Pipeline] stage 01:02:58.694 [Pipeline] { (Epilogue) 01:02:58.708 [Pipeline] catchError 01:02:58.710 [Pipeline] { 01:02:58.723 [Pipeline] echo 01:02:58.724 Cleanup processes 01:02:58.730 [Pipeline] sh 01:02:59.013 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:02:59.013 2467853 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 01:02:59.013 2468131 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:02:59.034 [Pipeline] sh 01:02:59.315 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 01:02:59.315 ++ grep -v 'sudo pgrep' 01:02:59.315 ++ awk '{print $1}' 01:02:59.315 + sudo kill -9 2467853 01:02:59.328 [Pipeline] sh 01:02:59.612 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:03:09.603 [Pipeline] sh 01:03:09.905 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:03:09.905 Artifacts sizes are good 01:03:09.933 [Pipeline] archiveArtifacts 01:03:09.940 Archiving artifacts 01:03:10.147 [Pipeline] sh 01:03:10.434 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 01:03:10.448 [Pipeline] cleanWs 01:03:10.458 [WS-CLEANUP] Deleting project workspace... 01:03:10.458 [WS-CLEANUP] Deferred wipeout is used... 01:03:10.465 [WS-CLEANUP] done 01:03:10.467 [Pipeline] } 01:03:10.488 [Pipeline] // catchError 01:03:10.500 [Pipeline] sh 01:03:10.784 + logger -p user.info -t JENKINS-CI 01:03:10.793 [Pipeline] } 01:03:10.809 [Pipeline] // stage 01:03:10.815 [Pipeline] } 01:03:10.832 [Pipeline] // node 01:03:10.837 [Pipeline] End of Pipeline 01:03:10.897 Finished: SUCCESS